00:00:00.001 Started by upstream project "autotest-per-patch" build number 130919 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.100 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.101 The recommended git tool is: git 00:00:00.101 using credential 00000000-0000-0000-0000-000000000002 00:00:00.103 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.169 Fetching changes from the remote Git repository 00:00:00.170 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.241 Using shallow fetch with depth 1 00:00:00.241 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.241 > git --version # timeout=10 00:00:00.292 > git --version # 'git version 2.39.2' 00:00:00.292 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.320 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.320 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.366 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.380 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.394 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD) 00:00:04.394 > git config core.sparsecheckout # timeout=10 00:00:04.407 > git read-tree -mu HEAD # timeout=10 00:00:04.423 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5 00:00:04.441 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images" 00:00:04.441 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10 00:00:04.525 [Pipeline] Start of Pipeline 00:00:04.538 [Pipeline] library 00:00:04.540 Loading library shm_lib@master 00:00:04.540 Library shm_lib@master is cached. Copying from home. 00:00:04.554 [Pipeline] node 00:00:04.562 Running on GP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.563 [Pipeline] { 00:00:04.577 [Pipeline] catchError 00:00:04.579 [Pipeline] { 00:00:04.590 [Pipeline] wrap 00:00:04.596 [Pipeline] { 00:00:04.604 [Pipeline] stage 00:00:04.606 [Pipeline] { (Prologue) 00:00:04.835 [Pipeline] sh 00:00:05.119 + logger -p user.info -t JENKINS-CI 00:00:05.136 [Pipeline] echo 00:00:05.138 Node: GP6 00:00:05.143 [Pipeline] sh 00:00:05.444 [Pipeline] setCustomBuildProperty 00:00:05.455 [Pipeline] echo 00:00:05.456 Cleanup processes 00:00:05.461 [Pipeline] sh 00:00:05.745 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.745 1048279 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.758 [Pipeline] sh 00:00:06.042 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.042 ++ grep -v 'sudo pgrep' 00:00:06.042 ++ awk '{print $1}' 00:00:06.042 + sudo kill -9 00:00:06.042 + true 00:00:06.058 [Pipeline] cleanWs 00:00:06.070 [WS-CLEANUP] Deleting project workspace... 00:00:06.070 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.076 [WS-CLEANUP] done 00:00:06.080 [Pipeline] setCustomBuildProperty 00:00:06.095 [Pipeline] sh 00:00:06.377 + sudo git config --global --replace-all safe.directory '*' 00:00:06.484 [Pipeline] httpRequest 00:00:06.994 [Pipeline] echo 00:00:06.996 Sorcerer 10.211.164.101 is alive 00:00:07.006 [Pipeline] retry 00:00:07.007 [Pipeline] { 00:00:07.023 [Pipeline] httpRequest 00:00:07.027 HttpMethod: GET 00:00:07.028 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:07.028 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:07.040 Response Code: HTTP/1.1 200 OK 00:00:07.040 Success: Status code 200 is in the accepted range: 200,404 00:00:07.040 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:11.587 [Pipeline] } 00:00:11.605 [Pipeline] // retry 00:00:11.613 [Pipeline] sh 00:00:11.900 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:11.916 [Pipeline] httpRequest 00:00:12.502 [Pipeline] echo 00:00:12.504 Sorcerer 10.211.164.101 is alive 00:00:12.514 [Pipeline] retry 00:00:12.517 [Pipeline] { 00:00:12.531 [Pipeline] httpRequest 00:00:12.536 HttpMethod: GET 00:00:12.537 URL: http://10.211.164.101/packages/spdk_fdd35b2abb02d20844cc78d002071ba597f8d80c.tar.gz 00:00:12.537 Sending request to url: http://10.211.164.101/packages/spdk_fdd35b2abb02d20844cc78d002071ba597f8d80c.tar.gz 00:00:12.559 Response Code: HTTP/1.1 200 OK 00:00:12.560 Success: Status code 200 is in the accepted range: 200,404 00:00:12.560 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_fdd35b2abb02d20844cc78d002071ba597f8d80c.tar.gz 00:01:53.610 [Pipeline] } 00:01:53.628 [Pipeline] // retry 00:01:53.635 [Pipeline] sh 00:01:53.922 + tar --no-same-owner -xf spdk_fdd35b2abb02d20844cc78d002071ba597f8d80c.tar.gz 00:01:56.467 [Pipeline] sh 00:01:56.755 + git -C spdk log --oneline -n5 00:01:56.755 fdd35b2ab nvme/perf: interrupt mode support for pcie controller 00:01:56.755 6f51f621d bdev/nvme: interrupt mode for PCIe nvme ctrlr 00:01:56.755 865972bb6 nvme: create, manage fd_group for nvme poll group 00:01:56.755 ba5b39cb2 thread: Extended options for spdk_interrupt_register 00:01:56.755 52e9db722 util: allow a fd_group to manage all its fds 00:01:56.767 [Pipeline] } 00:01:56.782 [Pipeline] // stage 00:01:56.792 [Pipeline] stage 00:01:56.795 [Pipeline] { (Prepare) 00:01:56.814 [Pipeline] writeFile 00:01:56.831 [Pipeline] sh 00:01:57.120 + logger -p user.info -t JENKINS-CI 00:01:57.133 [Pipeline] sh 00:01:57.420 + logger -p user.info -t JENKINS-CI 00:01:57.433 [Pipeline] sh 00:01:57.720 + cat autorun-spdk.conf 00:01:57.720 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:57.720 SPDK_TEST_NVMF=1 00:01:57.720 SPDK_TEST_NVME_CLI=1 00:01:57.720 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:57.720 SPDK_TEST_NVMF_NICS=e810 00:01:57.720 SPDK_TEST_VFIOUSER=1 00:01:57.720 SPDK_RUN_UBSAN=1 00:01:57.720 NET_TYPE=phy 00:01:57.728 RUN_NIGHTLY=0 00:01:57.733 [Pipeline] readFile 00:01:57.760 [Pipeline] withEnv 00:01:57.763 [Pipeline] { 00:01:57.777 [Pipeline] sh 00:01:58.066 + set -ex 00:01:58.066 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:58.066 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:58.066 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:58.066 ++ SPDK_TEST_NVMF=1 00:01:58.066 ++ SPDK_TEST_NVME_CLI=1 00:01:58.066 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:58.066 ++ 
SPDK_TEST_NVMF_NICS=e810 00:01:58.066 ++ SPDK_TEST_VFIOUSER=1 00:01:58.066 ++ SPDK_RUN_UBSAN=1 00:01:58.066 ++ NET_TYPE=phy 00:01:58.066 ++ RUN_NIGHTLY=0 00:01:58.066 + case $SPDK_TEST_NVMF_NICS in 00:01:58.066 + DRIVERS=ice 00:01:58.066 + [[ tcp == \r\d\m\a ]] 00:01:58.066 + [[ -n ice ]] 00:01:58.066 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:58.066 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:58.066 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:58.066 rmmod: ERROR: Module irdma is not currently loaded 00:01:58.066 rmmod: ERROR: Module i40iw is not currently loaded 00:01:58.066 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:58.066 + true 00:01:58.066 + for D in $DRIVERS 00:01:58.066 + sudo modprobe ice 00:01:58.066 + exit 0 00:01:58.076 [Pipeline] } 00:01:58.092 [Pipeline] // withEnv 00:01:58.098 [Pipeline] } 00:01:58.112 [Pipeline] // stage 00:01:58.123 [Pipeline] catchError 00:01:58.125 [Pipeline] { 00:01:58.139 [Pipeline] timeout 00:01:58.139 Timeout set to expire in 1 hr 0 min 00:01:58.141 [Pipeline] { 00:01:58.157 [Pipeline] stage 00:01:58.159 [Pipeline] { (Tests) 00:01:58.174 [Pipeline] sh 00:01:58.462 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:58.462 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:58.462 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:58.462 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:58.462 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:58.462 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:58.462 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:58.462 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:58.462 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:58.462 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:58.462 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:58.462 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:58.462 + source /etc/os-release 00:01:58.462 ++ NAME='Fedora Linux' 00:01:58.462 ++ VERSION='39 (Cloud Edition)' 00:01:58.462 ++ ID=fedora 00:01:58.462 ++ VERSION_ID=39 00:01:58.462 ++ VERSION_CODENAME= 00:01:58.462 ++ PLATFORM_ID=platform:f39 00:01:58.462 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:58.462 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:58.462 ++ LOGO=fedora-logo-icon 00:01:58.462 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:58.462 ++ HOME_URL=https://fedoraproject.org/ 00:01:58.462 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:58.462 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:58.462 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:58.462 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:58.462 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:58.462 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:58.462 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:58.462 ++ SUPPORT_END=2024-11-12 00:01:58.462 ++ VARIANT='Cloud Edition' 00:01:58.462 ++ VARIANT_ID=cloud 00:01:58.462 + uname -a 00:01:58.462 Linux spdk-gp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:58.462 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:59.843 Hugepages 00:01:59.843 node hugesize free / total 00:01:59.843 node0 1048576kB 0 / 0 00:01:59.843 node0 2048kB 0 / 0 00:01:59.843 node1 1048576kB 0 / 0 00:01:59.843 node1 2048kB 0 / 0 00:01:59.843 
00:01:59.843 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:59.843 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:59.843 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:59.843 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:59.843 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:59.843 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:59.843 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:59.843 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:59.843 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:59.843 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:59.843 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:59.843 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:59.843 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:59.843 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:59.843 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:59.843 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:59.843 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:59.843 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:59.843 + rm -f /tmp/spdk-ld-path 00:01:59.843 + source autorun-spdk.conf 00:01:59.843 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:59.843 ++ SPDK_TEST_NVMF=1 00:01:59.843 ++ SPDK_TEST_NVME_CLI=1 00:01:59.843 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:59.843 ++ SPDK_TEST_NVMF_NICS=e810 00:01:59.843 ++ SPDK_TEST_VFIOUSER=1 00:01:59.843 ++ SPDK_RUN_UBSAN=1 00:01:59.843 ++ NET_TYPE=phy 00:01:59.843 ++ RUN_NIGHTLY=0 00:01:59.843 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:59.843 + [[ -n '' ]] 00:01:59.843 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:59.843 + for M in /var/spdk/build-*-manifest.txt 00:01:59.843 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:59.843 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:59.843 + for M in /var/spdk/build-*-manifest.txt 00:01:59.843 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:59.844 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:59.844 + for M in /var/spdk/build-*-manifest.txt 00:01:59.844 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:59.844 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:59.844 ++ uname 00:01:59.844 + [[ Linux == \L\i\n\u\x ]] 00:01:59.844 + sudo dmesg -T 00:01:59.844 + sudo dmesg --clear 00:01:59.844 + dmesg_pid=1049575 00:01:59.844 + [[ Fedora Linux == FreeBSD ]] 00:01:59.844 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:59.844 + sudo dmesg -Tw 00:01:59.844 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:59.844 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:59.844 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:59.844 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:59.844 + [[ -x /usr/src/fio-static/fio ]] 00:01:59.844 + export FIO_BIN=/usr/src/fio-static/fio 00:01:59.844 + FIO_BIN=/usr/src/fio-static/fio 00:01:59.844 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:59.844 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:59.844 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:59.844 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:59.844 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:59.844 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:59.844 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:59.844 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:59.844 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:59.844 Test configuration: 00:01:59.844 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:59.844 SPDK_TEST_NVMF=1 00:01:59.844 SPDK_TEST_NVME_CLI=1 00:01:59.844 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:59.844 SPDK_TEST_NVMF_NICS=e810 00:01:59.844 SPDK_TEST_VFIOUSER=1 00:01:59.844 SPDK_RUN_UBSAN=1 00:01:59.844 NET_TYPE=phy 00:01:59.844 RUN_NIGHTLY=0 18:09:17 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:01:59.844 18:09:17 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:59.844 18:09:17 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:59.844 18:09:17 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:59.844 18:09:17 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:59.844 18:09:17 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:59.844 18:09:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:59.844 18:09:17 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:59.844 18:09:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:59.844 18:09:17 -- paths/export.sh@5 -- $ export PATH 00:01:59.844 18:09:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:59.844 18:09:17 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:59.844 18:09:17 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:59.844 18:09:17 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728403757.XXXXXX 00:01:59.844 18:09:17 -- common/autobuild_common.sh@486 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1728403757.3Gv0qq 00:01:59.844 18:09:17 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:59.844 18:09:17 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:59.844 18:09:17 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:59.844 18:09:17 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:59.844 18:09:17 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:59.844 18:09:17 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:59.844 18:09:17 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:59.844 18:09:17 -- common/autotest_common.sh@10 -- $ set +x 00:01:59.844 18:09:17 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:59.844 18:09:17 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:59.844 18:09:17 -- pm/common@17 -- $ local monitor 00:01:59.844 18:09:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:59.844 18:09:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:59.844 18:09:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:59.844 18:09:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:59.844 18:09:17 -- pm/common@21 -- $ date +%s 00:01:59.844 18:09:17 -- pm/common@21 -- $ date +%s 00:01:59.844 18:09:17 -- pm/common@25 -- $ sleep 1 00:01:59.844 18:09:17 -- pm/common@21 -- $ date +%s 00:01:59.844 18:09:17 -- pm/common@21 -- $ date +%s 00:01:59.844 18:09:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728403757 00:01:59.844 18:09:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728403757 00:01:59.844 18:09:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728403757 00:01:59.844 18:09:17 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728403757 00:01:59.844 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728403757_collect-vmstat.pm.log 00:01:59.844 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728403757_collect-cpu-load.pm.log 00:01:59.844 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728403757_collect-cpu-temp.pm.log 00:01:59.844 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728403757_collect-bmc-pm.bmc.pm.log 00:02:00.829 18:09:18 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:00.829 18:09:18 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:00.829 18:09:18 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:00.829 18:09:18 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:00.829 18:09:18 -- spdk/autobuild.sh@16 -- $ date -u 00:02:00.829 Tue Oct 8 04:09:18 PM UTC 2024 00:02:00.829 18:09:18 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:00.829 v25.01-pre-54-gfdd35b2ab 00:02:00.829 18:09:18 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:00.829 18:09:18 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:00.829 18:09:18 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:00.829 18:09:18 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:00.829 18:09:18 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:00.829 18:09:18 -- common/autotest_common.sh@10 -- $ set +x 00:02:00.829 ************************************ 00:02:00.829 START TEST ubsan 00:02:00.829 ************************************ 00:02:00.829 18:09:19 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:00.829 using ubsan 00:02:00.829 00:02:00.829 real 0m0.000s 00:02:00.829 user 0m0.000s 00:02:00.829 sys 0m0.000s 00:02:00.829 18:09:19 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:00.829 18:09:19 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:00.829 ************************************ 00:02:00.829 END TEST ubsan 00:02:00.829 ************************************ 00:02:00.829 18:09:19 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:00.829 18:09:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:00.829 18:09:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:00.829 18:09:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:00.829 18:09:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:00.829 18:09:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:00.829 18:09:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:00.829 18:09:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:00.829 18:09:19 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:02:01.122 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:01.122 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:01.382 Using 'verbs' RDMA provider 00:02:11.927 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:21.915 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:22.175 Creating mk/config.mk...done. 00:02:22.175 Creating mk/cc.flags.mk...done. 00:02:22.175 Type 'make' to build. 
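Aside, not part of the captured job output: the configure step recorded above can be replayed by hand roughly as sketched below. The flag set is copied verbatim from the log; the local SPDK checkout path, the fio source location (/usr/src/fio), and the parallel job count are properties of this CI host and should be treated as assumptions on any other machine.
    # Hedged sketch of the configure/build step shown in the log above.
    cd /path/to/spdk            # hypothetical local checkout (the CI job uses /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk)
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j"$(nproc)"           # the job itself runs 'run_test make make -j48' on this host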
00:02:22.175 18:09:40 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:02:22.175 18:09:40 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:22.175 18:09:40 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:22.175 18:09:40 -- common/autotest_common.sh@10 -- $ set +x 00:02:22.175 ************************************ 00:02:22.175 START TEST make 00:02:22.175 ************************************ 00:02:22.175 18:09:40 make -- common/autotest_common.sh@1125 -- $ make -j48 00:02:22.434 make[1]: Nothing to be done for 'all'. 00:02:24.354 The Meson build system 00:02:24.354 Version: 1.5.0 00:02:24.354 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:24.355 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:24.355 Build type: native build 00:02:24.355 Project name: libvfio-user 00:02:24.355 Project version: 0.0.1 00:02:24.355 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:24.355 C linker for the host machine: cc ld.bfd 2.40-14 00:02:24.355 Host machine cpu family: x86_64 00:02:24.355 Host machine cpu: x86_64 00:02:24.355 Run-time dependency threads found: YES 00:02:24.355 Library dl found: YES 00:02:24.355 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:24.355 Run-time dependency json-c found: YES 0.17 00:02:24.355 Run-time dependency cmocka found: YES 1.1.7 00:02:24.355 Program pytest-3 found: NO 00:02:24.355 Program flake8 found: NO 00:02:24.355 Program misspell-fixer found: NO 00:02:24.355 Program restructuredtext-lint found: NO 00:02:24.355 Program valgrind found: YES (/usr/bin/valgrind) 00:02:24.355 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:24.355 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:24.355 Compiler for C supports arguments -Wwrite-strings: YES 00:02:24.355 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:24.355 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:24.355 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:24.355 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:24.355 Build targets in project: 8 00:02:24.355 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:24.355 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:24.355 00:02:24.355 libvfio-user 0.0.1 00:02:24.355 00:02:24.355 User defined options 00:02:24.355 buildtype : debug 00:02:24.355 default_library: shared 00:02:24.355 libdir : /usr/local/lib 00:02:24.355 00:02:24.355 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:25.306 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:25.306 [1/37] Compiling C object samples/null.p/null.c.o 00:02:25.306 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:25.306 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:25.306 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:25.306 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:25.306 [6/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:25.306 [7/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:25.306 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:25.306 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:25.306 [10/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:25.306 [11/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:25.306 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:25.306 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:25.306 [14/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:25.306 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:25.306 [16/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:25.306 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:25.306 [18/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:25.306 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:25.306 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:25.565 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:25.565 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:25.565 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:25.565 [24/37] Compiling C object samples/server.p/server.c.o 00:02:25.565 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:25.565 [26/37] Compiling C object samples/client.p/client.c.o 00:02:25.565 [27/37] Linking target samples/client 00:02:25.565 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:25.565 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:25.565 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:02:25.825 [31/37] Linking target test/unit_tests 00:02:25.825 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:25.825 [33/37] Linking target samples/null 00:02:25.825 [34/37] Linking target samples/server 00:02:25.825 [35/37] Linking target samples/shadow_ioeventfd_server 00:02:26.085 [36/37] Linking target samples/lspci 00:02:26.085 [37/37] Linking target samples/gpio-pci-idio-16 00:02:26.085 INFO: autodetecting backend as ninja 00:02:26.085 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:02:26.085 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:27.028 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:27.028 ninja: no work to do. 00:02:32.298 The Meson build system 00:02:32.298 Version: 1.5.0 00:02:32.298 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:32.298 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:32.298 Build type: native build 00:02:32.298 Program cat found: YES (/usr/bin/cat) 00:02:32.298 Project name: DPDK 00:02:32.298 Project version: 24.03.0 00:02:32.298 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:32.298 C linker for the host machine: cc ld.bfd 2.40-14 00:02:32.298 Host machine cpu family: x86_64 00:02:32.298 Host machine cpu: x86_64 00:02:32.298 Message: ## Building in Developer Mode ## 00:02:32.298 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:32.298 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:32.298 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:32.298 Program python3 found: YES (/usr/bin/python3) 00:02:32.298 Program cat found: YES (/usr/bin/cat) 00:02:32.298 Compiler for C supports arguments -march=native: YES 00:02:32.298 Checking for size of "void *" : 8 00:02:32.298 Checking for size of "void *" : 8 (cached) 00:02:32.298 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:32.298 Library m found: YES 00:02:32.298 Library numa found: YES 00:02:32.298 Has header "numaif.h" : YES 00:02:32.298 Library fdt found: NO 00:02:32.298 Library execinfo found: NO 00:02:32.298 Has header "execinfo.h" : YES 00:02:32.298 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:32.298 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:32.298 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:32.298 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:32.298 Run-time dependency openssl found: YES 3.1.1 00:02:32.298 Run-time dependency libpcap found: YES 1.10.4 00:02:32.298 Has header "pcap.h" with dependency libpcap: YES 00:02:32.298 Compiler for C supports arguments -Wcast-qual: YES 00:02:32.298 Compiler for C supports arguments -Wdeprecated: YES 00:02:32.298 Compiler for C supports arguments -Wformat: YES 00:02:32.298 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:32.298 Compiler for C supports arguments -Wformat-security: NO 00:02:32.298 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:32.298 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:32.298 Compiler for C supports arguments -Wnested-externs: YES 00:02:32.298 Compiler for C supports arguments -Wold-style-definition: YES 00:02:32.298 Compiler for C supports arguments -Wpointer-arith: YES 00:02:32.298 Compiler for C supports arguments -Wsign-compare: YES 00:02:32.298 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:32.298 Compiler for C supports arguments -Wundef: YES 00:02:32.298 Compiler for C supports arguments -Wwrite-strings: YES 00:02:32.298 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:32.298 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:02:32.298 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:32.298 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:32.298 Program objdump found: YES (/usr/bin/objdump) 00:02:32.298 Compiler for C supports arguments -mavx512f: YES 00:02:32.298 Checking if "AVX512 checking" compiles: YES 00:02:32.298 Fetching value of define "__SSE4_2__" : 1 00:02:32.298 Fetching value of define "__AES__" : 1 00:02:32.298 Fetching value of define "__AVX__" : 1 00:02:32.298 Fetching value of define "__AVX2__" : (undefined) 00:02:32.298 Fetching value of define "__AVX512BW__" : (undefined) 00:02:32.298 Fetching value of define "__AVX512CD__" : (undefined) 00:02:32.298 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:32.298 Fetching value of define "__AVX512F__" : (undefined) 00:02:32.298 Fetching value of define "__AVX512VL__" : (undefined) 00:02:32.298 Fetching value of define "__PCLMUL__" : 1 00:02:32.298 Fetching value of define "__RDRND__" : 1 00:02:32.298 Fetching value of define "__RDSEED__" : (undefined) 00:02:32.298 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:32.299 Fetching value of define "__znver1__" : (undefined) 00:02:32.299 Fetching value of define "__znver2__" : (undefined) 00:02:32.299 Fetching value of define "__znver3__" : (undefined) 00:02:32.299 Fetching value of define "__znver4__" : (undefined) 00:02:32.299 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:32.299 Message: lib/log: Defining dependency "log" 00:02:32.299 Message: lib/kvargs: Defining dependency "kvargs" 00:02:32.299 Message: lib/telemetry: Defining dependency "telemetry" 00:02:32.299 Checking for function "getentropy" : NO 00:02:32.299 Message: lib/eal: Defining dependency "eal" 00:02:32.299 Message: lib/ring: Defining dependency "ring" 00:02:32.299 Message: lib/rcu: Defining dependency "rcu" 00:02:32.299 Message: lib/mempool: Defining dependency "mempool" 00:02:32.299 Message: lib/mbuf: Defining dependency "mbuf" 00:02:32.299 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:32.299 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:32.299 Compiler for C supports arguments -mpclmul: YES 00:02:32.299 Compiler for C supports arguments -maes: YES 00:02:32.299 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:32.299 Compiler for C supports arguments -mavx512bw: YES 00:02:32.299 Compiler for C supports arguments -mavx512dq: YES 00:02:32.299 Compiler for C supports arguments -mavx512vl: YES 00:02:32.299 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:32.299 Compiler for C supports arguments -mavx2: YES 00:02:32.299 Compiler for C supports arguments -mavx: YES 00:02:32.299 Message: lib/net: Defining dependency "net" 00:02:32.299 Message: lib/meter: Defining dependency "meter" 00:02:32.299 Message: lib/ethdev: Defining dependency "ethdev" 00:02:32.299 Message: lib/pci: Defining dependency "pci" 00:02:32.299 Message: lib/cmdline: Defining dependency "cmdline" 00:02:32.299 Message: lib/hash: Defining dependency "hash" 00:02:32.299 Message: lib/timer: Defining dependency "timer" 00:02:32.299 Message: lib/compressdev: Defining dependency "compressdev" 00:02:32.299 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:32.299 Message: lib/dmadev: Defining dependency "dmadev" 00:02:32.299 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:32.299 Message: lib/power: Defining dependency "power" 00:02:32.299 Message: lib/reorder: Defining dependency 
"reorder" 00:02:32.299 Message: lib/security: Defining dependency "security" 00:02:32.299 Has header "linux/userfaultfd.h" : YES 00:02:32.299 Has header "linux/vduse.h" : YES 00:02:32.299 Message: lib/vhost: Defining dependency "vhost" 00:02:32.299 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:32.299 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:32.299 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:32.299 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:32.299 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:32.299 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:32.299 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:32.299 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:32.299 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:32.299 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:32.299 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:32.299 Configuring doxy-api-html.conf using configuration 00:02:32.299 Configuring doxy-api-man.conf using configuration 00:02:32.299 Program mandb found: YES (/usr/bin/mandb) 00:02:32.299 Program sphinx-build found: NO 00:02:32.299 Configuring rte_build_config.h using configuration 00:02:32.299 Message: 00:02:32.299 ================= 00:02:32.299 Applications Enabled 00:02:32.299 ================= 00:02:32.299 00:02:32.299 apps: 00:02:32.299 00:02:32.299 00:02:32.299 Message: 00:02:32.299 ================= 00:02:32.299 Libraries Enabled 00:02:32.299 ================= 00:02:32.299 00:02:32.299 libs: 00:02:32.299 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:32.299 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:32.299 cryptodev, dmadev, power, reorder, security, vhost, 00:02:32.299 00:02:32.299 Message: 00:02:32.299 =============== 00:02:32.299 Drivers Enabled 00:02:32.299 =============== 00:02:32.299 00:02:32.299 common: 00:02:32.299 00:02:32.299 bus: 00:02:32.299 pci, vdev, 00:02:32.299 mempool: 00:02:32.299 ring, 00:02:32.299 dma: 00:02:32.299 00:02:32.299 net: 00:02:32.299 00:02:32.299 crypto: 00:02:32.299 00:02:32.299 compress: 00:02:32.299 00:02:32.299 vdpa: 00:02:32.299 00:02:32.299 00:02:32.299 Message: 00:02:32.299 ================= 00:02:32.299 Content Skipped 00:02:32.299 ================= 00:02:32.299 00:02:32.299 apps: 00:02:32.299 dumpcap: explicitly disabled via build config 00:02:32.299 graph: explicitly disabled via build config 00:02:32.299 pdump: explicitly disabled via build config 00:02:32.299 proc-info: explicitly disabled via build config 00:02:32.299 test-acl: explicitly disabled via build config 00:02:32.299 test-bbdev: explicitly disabled via build config 00:02:32.299 test-cmdline: explicitly disabled via build config 00:02:32.299 test-compress-perf: explicitly disabled via build config 00:02:32.299 test-crypto-perf: explicitly disabled via build config 00:02:32.299 test-dma-perf: explicitly disabled via build config 00:02:32.299 test-eventdev: explicitly disabled via build config 00:02:32.299 test-fib: explicitly disabled via build config 00:02:32.299 test-flow-perf: explicitly disabled via build config 00:02:32.299 test-gpudev: explicitly disabled via build config 00:02:32.299 test-mldev: explicitly disabled via build config 00:02:32.299 test-pipeline: explicitly disabled via build config 00:02:32.299 test-pmd: explicitly 
disabled via build config 00:02:32.299 test-regex: explicitly disabled via build config 00:02:32.299 test-sad: explicitly disabled via build config 00:02:32.299 test-security-perf: explicitly disabled via build config 00:02:32.299 00:02:32.299 libs: 00:02:32.299 argparse: explicitly disabled via build config 00:02:32.299 metrics: explicitly disabled via build config 00:02:32.299 acl: explicitly disabled via build config 00:02:32.299 bbdev: explicitly disabled via build config 00:02:32.299 bitratestats: explicitly disabled via build config 00:02:32.299 bpf: explicitly disabled via build config 00:02:32.299 cfgfile: explicitly disabled via build config 00:02:32.299 distributor: explicitly disabled via build config 00:02:32.299 efd: explicitly disabled via build config 00:02:32.299 eventdev: explicitly disabled via build config 00:02:32.299 dispatcher: explicitly disabled via build config 00:02:32.299 gpudev: explicitly disabled via build config 00:02:32.299 gro: explicitly disabled via build config 00:02:32.299 gso: explicitly disabled via build config 00:02:32.299 ip_frag: explicitly disabled via build config 00:02:32.299 jobstats: explicitly disabled via build config 00:02:32.299 latencystats: explicitly disabled via build config 00:02:32.299 lpm: explicitly disabled via build config 00:02:32.299 member: explicitly disabled via build config 00:02:32.299 pcapng: explicitly disabled via build config 00:02:32.299 rawdev: explicitly disabled via build config 00:02:32.299 regexdev: explicitly disabled via build config 00:02:32.299 mldev: explicitly disabled via build config 00:02:32.299 rib: explicitly disabled via build config 00:02:32.299 sched: explicitly disabled via build config 00:02:32.299 stack: explicitly disabled via build config 00:02:32.299 ipsec: explicitly disabled via build config 00:02:32.299 pdcp: explicitly disabled via build config 00:02:32.299 fib: explicitly disabled via build config 00:02:32.299 port: explicitly disabled via build config 00:02:32.299 pdump: explicitly disabled via build config 00:02:32.299 table: explicitly disabled via build config 00:02:32.299 pipeline: explicitly disabled via build config 00:02:32.299 graph: explicitly disabled via build config 00:02:32.299 node: explicitly disabled via build config 00:02:32.299 00:02:32.299 drivers: 00:02:32.299 common/cpt: not in enabled drivers build config 00:02:32.299 common/dpaax: not in enabled drivers build config 00:02:32.299 common/iavf: not in enabled drivers build config 00:02:32.299 common/idpf: not in enabled drivers build config 00:02:32.299 common/ionic: not in enabled drivers build config 00:02:32.299 common/mvep: not in enabled drivers build config 00:02:32.299 common/octeontx: not in enabled drivers build config 00:02:32.299 bus/auxiliary: not in enabled drivers build config 00:02:32.299 bus/cdx: not in enabled drivers build config 00:02:32.299 bus/dpaa: not in enabled drivers build config 00:02:32.299 bus/fslmc: not in enabled drivers build config 00:02:32.299 bus/ifpga: not in enabled drivers build config 00:02:32.299 bus/platform: not in enabled drivers build config 00:02:32.299 bus/uacce: not in enabled drivers build config 00:02:32.299 bus/vmbus: not in enabled drivers build config 00:02:32.299 common/cnxk: not in enabled drivers build config 00:02:32.299 common/mlx5: not in enabled drivers build config 00:02:32.299 common/nfp: not in enabled drivers build config 00:02:32.299 common/nitrox: not in enabled drivers build config 00:02:32.299 common/qat: not in enabled drivers build config 
00:02:32.299 common/sfc_efx: not in enabled drivers build config 00:02:32.299 mempool/bucket: not in enabled drivers build config 00:02:32.299 mempool/cnxk: not in enabled drivers build config 00:02:32.299 mempool/dpaa: not in enabled drivers build config 00:02:32.299 mempool/dpaa2: not in enabled drivers build config 00:02:32.299 mempool/octeontx: not in enabled drivers build config 00:02:32.299 mempool/stack: not in enabled drivers build config 00:02:32.299 dma/cnxk: not in enabled drivers build config 00:02:32.299 dma/dpaa: not in enabled drivers build config 00:02:32.299 dma/dpaa2: not in enabled drivers build config 00:02:32.299 dma/hisilicon: not in enabled drivers build config 00:02:32.299 dma/idxd: not in enabled drivers build config 00:02:32.299 dma/ioat: not in enabled drivers build config 00:02:32.299 dma/skeleton: not in enabled drivers build config 00:02:32.299 net/af_packet: not in enabled drivers build config 00:02:32.299 net/af_xdp: not in enabled drivers build config 00:02:32.299 net/ark: not in enabled drivers build config 00:02:32.299 net/atlantic: not in enabled drivers build config 00:02:32.299 net/avp: not in enabled drivers build config 00:02:32.299 net/axgbe: not in enabled drivers build config 00:02:32.299 net/bnx2x: not in enabled drivers build config 00:02:32.299 net/bnxt: not in enabled drivers build config 00:02:32.299 net/bonding: not in enabled drivers build config 00:02:32.299 net/cnxk: not in enabled drivers build config 00:02:32.299 net/cpfl: not in enabled drivers build config 00:02:32.299 net/cxgbe: not in enabled drivers build config 00:02:32.299 net/dpaa: not in enabled drivers build config 00:02:32.300 net/dpaa2: not in enabled drivers build config 00:02:32.300 net/e1000: not in enabled drivers build config 00:02:32.300 net/ena: not in enabled drivers build config 00:02:32.300 net/enetc: not in enabled drivers build config 00:02:32.300 net/enetfec: not in enabled drivers build config 00:02:32.300 net/enic: not in enabled drivers build config 00:02:32.300 net/failsafe: not in enabled drivers build config 00:02:32.300 net/fm10k: not in enabled drivers build config 00:02:32.300 net/gve: not in enabled drivers build config 00:02:32.300 net/hinic: not in enabled drivers build config 00:02:32.300 net/hns3: not in enabled drivers build config 00:02:32.300 net/i40e: not in enabled drivers build config 00:02:32.300 net/iavf: not in enabled drivers build config 00:02:32.300 net/ice: not in enabled drivers build config 00:02:32.300 net/idpf: not in enabled drivers build config 00:02:32.300 net/igc: not in enabled drivers build config 00:02:32.300 net/ionic: not in enabled drivers build config 00:02:32.300 net/ipn3ke: not in enabled drivers build config 00:02:32.300 net/ixgbe: not in enabled drivers build config 00:02:32.300 net/mana: not in enabled drivers build config 00:02:32.300 net/memif: not in enabled drivers build config 00:02:32.300 net/mlx4: not in enabled drivers build config 00:02:32.300 net/mlx5: not in enabled drivers build config 00:02:32.300 net/mvneta: not in enabled drivers build config 00:02:32.300 net/mvpp2: not in enabled drivers build config 00:02:32.300 net/netvsc: not in enabled drivers build config 00:02:32.300 net/nfb: not in enabled drivers build config 00:02:32.300 net/nfp: not in enabled drivers build config 00:02:32.300 net/ngbe: not in enabled drivers build config 00:02:32.300 net/null: not in enabled drivers build config 00:02:32.300 net/octeontx: not in enabled drivers build config 00:02:32.300 net/octeon_ep: not in enabled 
drivers build config 00:02:32.300 net/pcap: not in enabled drivers build config 00:02:32.300 net/pfe: not in enabled drivers build config 00:02:32.300 net/qede: not in enabled drivers build config 00:02:32.300 net/ring: not in enabled drivers build config 00:02:32.300 net/sfc: not in enabled drivers build config 00:02:32.300 net/softnic: not in enabled drivers build config 00:02:32.300 net/tap: not in enabled drivers build config 00:02:32.300 net/thunderx: not in enabled drivers build config 00:02:32.300 net/txgbe: not in enabled drivers build config 00:02:32.300 net/vdev_netvsc: not in enabled drivers build config 00:02:32.300 net/vhost: not in enabled drivers build config 00:02:32.300 net/virtio: not in enabled drivers build config 00:02:32.300 net/vmxnet3: not in enabled drivers build config 00:02:32.300 raw/*: missing internal dependency, "rawdev" 00:02:32.300 crypto/armv8: not in enabled drivers build config 00:02:32.300 crypto/bcmfs: not in enabled drivers build config 00:02:32.300 crypto/caam_jr: not in enabled drivers build config 00:02:32.300 crypto/ccp: not in enabled drivers build config 00:02:32.300 crypto/cnxk: not in enabled drivers build config 00:02:32.300 crypto/dpaa_sec: not in enabled drivers build config 00:02:32.300 crypto/dpaa2_sec: not in enabled drivers build config 00:02:32.300 crypto/ipsec_mb: not in enabled drivers build config 00:02:32.300 crypto/mlx5: not in enabled drivers build config 00:02:32.300 crypto/mvsam: not in enabled drivers build config 00:02:32.300 crypto/nitrox: not in enabled drivers build config 00:02:32.300 crypto/null: not in enabled drivers build config 00:02:32.300 crypto/octeontx: not in enabled drivers build config 00:02:32.300 crypto/openssl: not in enabled drivers build config 00:02:32.300 crypto/scheduler: not in enabled drivers build config 00:02:32.300 crypto/uadk: not in enabled drivers build config 00:02:32.300 crypto/virtio: not in enabled drivers build config 00:02:32.300 compress/isal: not in enabled drivers build config 00:02:32.300 compress/mlx5: not in enabled drivers build config 00:02:32.300 compress/nitrox: not in enabled drivers build config 00:02:32.300 compress/octeontx: not in enabled drivers build config 00:02:32.300 compress/zlib: not in enabled drivers build config 00:02:32.300 regex/*: missing internal dependency, "regexdev" 00:02:32.300 ml/*: missing internal dependency, "mldev" 00:02:32.300 vdpa/ifc: not in enabled drivers build config 00:02:32.300 vdpa/mlx5: not in enabled drivers build config 00:02:32.300 vdpa/nfp: not in enabled drivers build config 00:02:32.300 vdpa/sfc: not in enabled drivers build config 00:02:32.300 event/*: missing internal dependency, "eventdev" 00:02:32.300 baseband/*: missing internal dependency, "bbdev" 00:02:32.300 gpu/*: missing internal dependency, "gpudev" 00:02:32.300 00:02:32.300 00:02:32.300 Build targets in project: 85 00:02:32.300 00:02:32.300 DPDK 24.03.0 00:02:32.300 00:02:32.300 User defined options 00:02:32.300 buildtype : debug 00:02:32.300 default_library : shared 00:02:32.300 libdir : lib 00:02:32.300 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:32.300 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:32.300 c_link_args : 00:02:32.300 cpu_instruction_set: native 00:02:32.300 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:02:32.300 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:02:32.300 enable_docs : false 00:02:32.300 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:32.300 enable_kmods : false 00:02:32.300 max_lcores : 128 00:02:32.300 tests : false 00:02:32.300 00:02:32.300 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:32.300 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:32.300 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:32.561 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:32.561 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:32.561 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:32.561 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:32.561 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:32.561 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:32.561 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:32.561 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:32.561 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:32.561 [11/268] Linking static target lib/librte_kvargs.a 00:02:32.561 [12/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:32.561 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:32.561 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:32.561 [15/268] Linking static target lib/librte_log.a 00:02:32.561 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:33.139 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.398 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:33.398 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:33.398 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:33.398 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:33.398 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:33.398 [23/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:33.398 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:33.398 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:33.398 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:33.398 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:33.398 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:33.398 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:33.398 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 
00:02:33.398 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:33.398 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:33.398 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:33.398 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:33.398 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:33.398 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:33.398 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:33.398 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:33.398 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:33.398 [40/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:33.398 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:33.398 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:33.398 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:33.398 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:33.398 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:33.398 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:33.398 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:33.398 [48/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:33.398 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:33.398 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:33.398 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:33.399 [52/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:33.399 [53/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:33.399 [54/268] Linking static target lib/librte_telemetry.a 00:02:33.399 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:33.658 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:33.658 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:33.658 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:33.658 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:33.658 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:33.658 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:33.658 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:33.658 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:33.658 [64/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.658 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:33.920 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:33.920 [67/268] Linking target lib/librte_log.so.24.1 00:02:33.920 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:33.920 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:33.920 [70/268] Linking static target lib/librte_pci.a 00:02:33.920 [71/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:34.185 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:34.185 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:34.185 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:34.185 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:34.185 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:34.185 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:34.185 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:34.185 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:34.185 [80/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:34.185 [81/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:34.185 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:34.185 [83/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:34.447 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:34.447 [85/268] Linking target lib/librte_kvargs.so.24.1 00:02:34.447 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:34.447 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:34.447 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:34.447 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:34.447 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:34.447 [91/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:34.447 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:34.447 [93/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:34.447 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:34.447 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:34.447 [96/268] Linking static target lib/librte_meter.a 00:02:34.447 [97/268] Linking static target lib/librte_ring.a 00:02:34.447 [98/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:34.447 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:34.447 [100/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:34.447 [101/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:34.447 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:34.447 [103/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:34.447 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:34.447 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:34.447 [106/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.447 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:34.447 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:34.715 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:34.715 [110/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:34.715 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:34.715 [112/268] Linking 
static target lib/librte_mempool.a 00:02:34.715 [113/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.715 [114/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:34.715 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:34.715 [116/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:34.715 [117/268] Linking static target lib/librte_eal.a 00:02:34.715 [118/268] Linking static target lib/librte_rcu.a 00:02:34.715 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:34.715 [120/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:34.715 [121/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:34.715 [122/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:34.715 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:34.715 [124/268] Linking target lib/librte_telemetry.so.24.1 00:02:34.715 [125/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:34.715 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:34.715 [127/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:34.976 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:34.976 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:34.976 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:34.976 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:34.976 [132/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.976 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:34.976 [134/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:34.976 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:34.976 [136/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:34.976 [137/268] Linking static target lib/librte_net.a 00:02:35.237 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:35.237 [139/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.237 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:35.237 [141/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.237 [142/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:35.237 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:35.237 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:35.237 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:35.237 [146/268] Linking static target lib/librte_cmdline.a 00:02:35.237 [147/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:35.237 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:35.497 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:35.497 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:35.497 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:35.497 [152/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:35.497 [153/268] Linking static target lib/librte_timer.a 00:02:35.497 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:35.497 [155/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.497 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:35.497 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:35.497 [158/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:35.497 [159/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:35.757 [160/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:35.757 [161/268] Linking static target lib/librte_dmadev.a 00:02:35.757 [162/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:35.757 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:35.757 [164/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.757 [165/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:35.757 [166/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:35.757 [167/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:35.757 [168/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:35.757 [169/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:35.757 [170/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:35.757 [171/268] Linking static target lib/librte_power.a 00:02:36.015 [172/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:36.015 [173/268] Linking static target lib/librte_compressdev.a 00:02:36.015 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:36.015 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:36.015 [176/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.015 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:36.015 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:36.015 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:36.015 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:36.015 [181/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:36.015 [182/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:36.015 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:36.015 [184/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:36.015 [185/268] Linking static target lib/librte_hash.a 00:02:36.015 [186/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:36.015 [187/268] Linking static target lib/librte_reorder.a 00:02:36.015 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:36.273 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:36.273 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:36.273 [191/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.273 [192/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:36.273 [193/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:36.273 [194/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:36.273 [195/268] Linking static target lib/librte_mbuf.a 00:02:36.273 [196/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.273 [197/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:36.273 [198/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:36.273 [199/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:36.273 [200/268] Linking static target drivers/librte_bus_vdev.a 00:02:36.273 [201/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:36.273 [202/268] Linking static target lib/librte_security.a 00:02:36.273 [203/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:36.273 [204/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:36.273 [205/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.273 [206/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.532 [207/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.532 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:36.532 [209/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:36.532 [210/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:36.532 [211/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:36.532 [212/268] Linking static target drivers/librte_bus_pci.a 00:02:36.532 [213/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:36.532 [214/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.532 [215/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:36.532 [216/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:36.532 [217/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:36.532 [218/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.532 [219/268] Linking static target drivers/librte_mempool_ring.a 00:02:36.790 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.790 [221/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.790 [222/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:36.790 [223/268] Linking static target lib/librte_ethdev.a 00:02:36.790 [224/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:36.790 [225/268] Linking static target lib/librte_cryptodev.a 00:02:36.790 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.163 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.097 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:40.997 [229/268] Generating lib/eal.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:40.997 [230/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.997 [231/268] Linking target lib/librte_eal.so.24.1 00:02:41.256 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:41.256 [233/268] Linking target lib/librte_ring.so.24.1 00:02:41.256 [234/268] Linking target lib/librte_meter.so.24.1 00:02:41.256 [235/268] Linking target lib/librte_timer.so.24.1 00:02:41.256 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:41.256 [237/268] Linking target lib/librte_pci.so.24.1 00:02:41.256 [238/268] Linking target lib/librte_dmadev.so.24.1 00:02:41.256 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:41.256 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:41.256 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:41.256 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:41.256 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:41.514 [244/268] Linking target lib/librte_rcu.so.24.1 00:02:41.514 [245/268] Linking target lib/librte_mempool.so.24.1 00:02:41.514 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:41.514 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:41.514 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:41.514 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:41.514 [250/268] Linking target lib/librte_mbuf.so.24.1 00:02:41.772 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:41.772 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:41.772 [253/268] Linking target lib/librte_compressdev.so.24.1 00:02:41.772 [254/268] Linking target lib/librte_net.so.24.1 00:02:41.772 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:02:41.772 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:41.772 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:42.030 [258/268] Linking target lib/librte_security.so.24.1 00:02:42.030 [259/268] Linking target lib/librte_hash.so.24.1 00:02:42.030 [260/268] Linking target lib/librte_cmdline.so.24.1 00:02:42.030 [261/268] Linking target lib/librte_ethdev.so.24.1 00:02:42.030 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:42.030 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:42.030 [264/268] Linking target lib/librte_power.so.24.1 00:02:45.312 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:45.312 [266/268] Linking static target lib/librte_vhost.a 00:02:45.877 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.877 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:46.135 INFO: autodetecting backend as ninja 00:02:46.135 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:03:08.109 CC lib/ut/ut.o 00:03:08.109 CC lib/log/log.o 00:03:08.109 CC lib/log/log_flags.o 00:03:08.109 CC lib/ut_mock/mock.o 00:03:08.109 CC lib/log/log_deprecated.o 00:03:08.109 LIB 
libspdk_ut.a 00:03:08.109 LIB libspdk_log.a 00:03:08.109 LIB libspdk_ut_mock.a 00:03:08.109 SO libspdk_ut.so.2.0 00:03:08.109 SO libspdk_ut_mock.so.6.0 00:03:08.109 SO libspdk_log.so.7.0 00:03:08.109 SYMLINK libspdk_ut.so 00:03:08.109 SYMLINK libspdk_ut_mock.so 00:03:08.109 SYMLINK libspdk_log.so 00:03:08.109 CC lib/ioat/ioat.o 00:03:08.109 CXX lib/trace_parser/trace.o 00:03:08.109 CC lib/dma/dma.o 00:03:08.109 CC lib/util/base64.o 00:03:08.109 CC lib/util/bit_array.o 00:03:08.109 CC lib/util/cpuset.o 00:03:08.109 CC lib/util/crc16.o 00:03:08.109 CC lib/util/crc32.o 00:03:08.109 CC lib/util/crc32c.o 00:03:08.109 CC lib/util/crc32_ieee.o 00:03:08.109 CC lib/util/crc64.o 00:03:08.109 CC lib/util/dif.o 00:03:08.109 CC lib/util/fd.o 00:03:08.109 CC lib/util/fd_group.o 00:03:08.109 CC lib/util/file.o 00:03:08.109 CC lib/util/hexlify.o 00:03:08.109 CC lib/util/iov.o 00:03:08.109 CC lib/util/math.o 00:03:08.109 CC lib/util/net.o 00:03:08.109 CC lib/util/pipe.o 00:03:08.109 CC lib/util/strerror_tls.o 00:03:08.109 CC lib/util/string.o 00:03:08.109 CC lib/util/uuid.o 00:03:08.109 CC lib/util/xor.o 00:03:08.109 CC lib/util/zipf.o 00:03:08.109 CC lib/util/md5.o 00:03:08.109 CC lib/vfio_user/host/vfio_user_pci.o 00:03:08.109 CC lib/vfio_user/host/vfio_user.o 00:03:08.109 LIB libspdk_dma.a 00:03:08.109 SO libspdk_dma.so.5.0 00:03:08.109 SYMLINK libspdk_dma.so 00:03:08.110 LIB libspdk_ioat.a 00:03:08.110 SO libspdk_ioat.so.7.0 00:03:08.110 SYMLINK libspdk_ioat.so 00:03:08.110 LIB libspdk_vfio_user.a 00:03:08.110 SO libspdk_vfio_user.so.5.0 00:03:08.110 SYMLINK libspdk_vfio_user.so 00:03:08.110 LIB libspdk_util.a 00:03:08.110 SO libspdk_util.so.10.1 00:03:08.110 SYMLINK libspdk_util.so 00:03:08.110 CC lib/rdma_utils/rdma_utils.o 00:03:08.110 CC lib/json/json_parse.o 00:03:08.110 CC lib/conf/conf.o 00:03:08.110 CC lib/idxd/idxd.o 00:03:08.110 CC lib/rdma_provider/common.o 00:03:08.110 CC lib/env_dpdk/env.o 00:03:08.110 CC lib/json/json_util.o 00:03:08.110 CC lib/idxd/idxd_user.o 00:03:08.110 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:08.110 CC lib/json/json_write.o 00:03:08.110 CC lib/env_dpdk/memory.o 00:03:08.110 CC lib/idxd/idxd_kernel.o 00:03:08.110 CC lib/env_dpdk/pci.o 00:03:08.110 CC lib/vmd/vmd.o 00:03:08.110 CC lib/env_dpdk/init.o 00:03:08.110 CC lib/env_dpdk/threads.o 00:03:08.110 CC lib/vmd/led.o 00:03:08.110 CC lib/env_dpdk/pci_ioat.o 00:03:08.110 CC lib/env_dpdk/pci_virtio.o 00:03:08.110 CC lib/env_dpdk/pci_vmd.o 00:03:08.110 CC lib/env_dpdk/pci_idxd.o 00:03:08.110 CC lib/env_dpdk/pci_event.o 00:03:08.110 CC lib/env_dpdk/sigbus_handler.o 00:03:08.110 CC lib/env_dpdk/pci_dpdk.o 00:03:08.110 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:08.110 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:08.110 LIB libspdk_trace_parser.a 00:03:08.110 SO libspdk_trace_parser.so.6.0 00:03:08.110 LIB libspdk_conf.a 00:03:08.110 SYMLINK libspdk_trace_parser.so 00:03:08.110 SO libspdk_conf.so.6.0 00:03:08.110 LIB libspdk_rdma_provider.a 00:03:08.110 LIB libspdk_json.a 00:03:08.110 SYMLINK libspdk_conf.so 00:03:08.110 SO libspdk_rdma_provider.so.6.0 00:03:08.110 SO libspdk_json.so.6.0 00:03:08.110 SYMLINK libspdk_rdma_provider.so 00:03:08.110 SYMLINK libspdk_json.so 00:03:08.110 LIB libspdk_rdma_utils.a 00:03:08.110 SO libspdk_rdma_utils.so.1.0 00:03:08.110 SYMLINK libspdk_rdma_utils.so 00:03:08.110 CC lib/jsonrpc/jsonrpc_server.o 00:03:08.110 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:08.110 CC lib/jsonrpc/jsonrpc_client.o 00:03:08.110 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:08.110 LIB libspdk_idxd.a 00:03:08.110 LIB 
libspdk_vmd.a 00:03:08.110 SO libspdk_vmd.so.6.0 00:03:08.110 SO libspdk_idxd.so.12.1 00:03:08.110 SYMLINK libspdk_vmd.so 00:03:08.110 SYMLINK libspdk_idxd.so 00:03:08.110 LIB libspdk_jsonrpc.a 00:03:08.110 SO libspdk_jsonrpc.so.6.0 00:03:08.110 SYMLINK libspdk_jsonrpc.so 00:03:08.110 CC lib/rpc/rpc.o 00:03:08.110 LIB libspdk_rpc.a 00:03:08.110 SO libspdk_rpc.so.6.0 00:03:08.110 SYMLINK libspdk_rpc.so 00:03:08.110 CC lib/trace/trace.o 00:03:08.110 CC lib/trace/trace_flags.o 00:03:08.110 CC lib/trace/trace_rpc.o 00:03:08.110 CC lib/keyring/keyring.o 00:03:08.110 CC lib/notify/notify.o 00:03:08.110 CC lib/notify/notify_rpc.o 00:03:08.110 CC lib/keyring/keyring_rpc.o 00:03:08.110 LIB libspdk_notify.a 00:03:08.110 SO libspdk_notify.so.6.0 00:03:08.110 SYMLINK libspdk_notify.so 00:03:08.110 LIB libspdk_keyring.a 00:03:08.110 LIB libspdk_trace.a 00:03:08.110 SO libspdk_keyring.so.2.0 00:03:08.368 SO libspdk_trace.so.11.0 00:03:08.368 SYMLINK libspdk_keyring.so 00:03:08.368 SYMLINK libspdk_trace.so 00:03:08.368 LIB libspdk_env_dpdk.a 00:03:08.368 CC lib/sock/sock.o 00:03:08.368 CC lib/sock/sock_rpc.o 00:03:08.368 CC lib/thread/thread.o 00:03:08.368 CC lib/thread/iobuf.o 00:03:08.368 SO libspdk_env_dpdk.so.15.1 00:03:08.626 SYMLINK libspdk_env_dpdk.so 00:03:08.884 LIB libspdk_sock.a 00:03:08.884 SO libspdk_sock.so.10.0 00:03:08.884 SYMLINK libspdk_sock.so 00:03:09.143 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:09.143 CC lib/nvme/nvme_ctrlr.o 00:03:09.143 CC lib/nvme/nvme_fabric.o 00:03:09.143 CC lib/nvme/nvme_ns.o 00:03:09.143 CC lib/nvme/nvme_ns_cmd.o 00:03:09.143 CC lib/nvme/nvme_pcie_common.o 00:03:09.143 CC lib/nvme/nvme_pcie.o 00:03:09.143 CC lib/nvme/nvme_qpair.o 00:03:09.143 CC lib/nvme/nvme.o 00:03:09.143 CC lib/nvme/nvme_quirks.o 00:03:09.143 CC lib/nvme/nvme_transport.o 00:03:09.143 CC lib/nvme/nvme_discovery.o 00:03:09.143 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:09.143 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:09.143 CC lib/nvme/nvme_tcp.o 00:03:09.143 CC lib/nvme/nvme_opal.o 00:03:09.143 CC lib/nvme/nvme_io_msg.o 00:03:09.143 CC lib/nvme/nvme_poll_group.o 00:03:09.143 CC lib/nvme/nvme_zns.o 00:03:09.143 CC lib/nvme/nvme_stubs.o 00:03:09.143 CC lib/nvme/nvme_auth.o 00:03:09.143 CC lib/nvme/nvme_cuse.o 00:03:09.143 CC lib/nvme/nvme_vfio_user.o 00:03:09.143 CC lib/nvme/nvme_rdma.o 00:03:10.126 LIB libspdk_thread.a 00:03:10.126 SO libspdk_thread.so.10.2 00:03:10.126 SYMLINK libspdk_thread.so 00:03:10.419 CC lib/accel/accel.o 00:03:10.419 CC lib/fsdev/fsdev.o 00:03:10.419 CC lib/virtio/virtio.o 00:03:10.419 CC lib/accel/accel_rpc.o 00:03:10.419 CC lib/virtio/virtio_vhost_user.o 00:03:10.419 CC lib/fsdev/fsdev_io.o 00:03:10.419 CC lib/virtio/virtio_vfio_user.o 00:03:10.419 CC lib/accel/accel_sw.o 00:03:10.419 CC lib/fsdev/fsdev_rpc.o 00:03:10.419 CC lib/virtio/virtio_pci.o 00:03:10.419 CC lib/init/json_config.o 00:03:10.419 CC lib/blob/blobstore.o 00:03:10.419 CC lib/vfu_tgt/tgt_endpoint.o 00:03:10.419 CC lib/init/subsystem.o 00:03:10.419 CC lib/vfu_tgt/tgt_rpc.o 00:03:10.419 CC lib/blob/request.o 00:03:10.419 CC lib/init/subsystem_rpc.o 00:03:10.419 CC lib/init/rpc.o 00:03:10.419 CC lib/blob/zeroes.o 00:03:10.419 CC lib/blob/blob_bs_dev.o 00:03:10.722 LIB libspdk_init.a 00:03:10.722 SO libspdk_init.so.6.0 00:03:10.722 LIB libspdk_virtio.a 00:03:10.722 LIB libspdk_vfu_tgt.a 00:03:10.722 SYMLINK libspdk_init.so 00:03:10.722 SO libspdk_vfu_tgt.so.3.0 00:03:10.722 SO libspdk_virtio.so.7.0 00:03:10.722 SYMLINK libspdk_vfu_tgt.so 00:03:10.722 SYMLINK libspdk_virtio.so 00:03:10.978 CC lib/event/app.o 
00:03:10.978 CC lib/event/reactor.o 00:03:10.978 CC lib/event/log_rpc.o 00:03:10.978 CC lib/event/app_rpc.o 00:03:10.978 CC lib/event/scheduler_static.o 00:03:10.978 LIB libspdk_fsdev.a 00:03:10.978 SO libspdk_fsdev.so.1.0 00:03:11.236 SYMLINK libspdk_fsdev.so 00:03:11.236 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:11.236 LIB libspdk_event.a 00:03:11.236 SO libspdk_event.so.15.0 00:03:11.494 SYMLINK libspdk_event.so 00:03:11.494 LIB libspdk_nvme.a 00:03:11.494 LIB libspdk_accel.a 00:03:11.494 SO libspdk_accel.so.16.0 00:03:11.753 SYMLINK libspdk_accel.so 00:03:11.753 SO libspdk_nvme.so.15.0 00:03:11.753 CC lib/bdev/bdev.o 00:03:11.753 CC lib/bdev/bdev_rpc.o 00:03:11.753 CC lib/bdev/bdev_zone.o 00:03:11.753 CC lib/bdev/part.o 00:03:11.753 CC lib/bdev/scsi_nvme.o 00:03:12.010 SYMLINK libspdk_nvme.so 00:03:12.010 LIB libspdk_fuse_dispatcher.a 00:03:12.010 SO libspdk_fuse_dispatcher.so.1.0 00:03:12.010 SYMLINK libspdk_fuse_dispatcher.so 00:03:13.384 LIB libspdk_blob.a 00:03:13.384 SO libspdk_blob.so.11.0 00:03:13.641 SYMLINK libspdk_blob.so 00:03:13.641 CC lib/blobfs/tree.o 00:03:13.641 CC lib/blobfs/blobfs.o 00:03:13.641 CC lib/lvol/lvol.o 00:03:14.575 LIB libspdk_bdev.a 00:03:14.575 SO libspdk_bdev.so.17.0 00:03:14.575 SYMLINK libspdk_bdev.so 00:03:14.575 LIB libspdk_blobfs.a 00:03:14.575 SO libspdk_blobfs.so.10.0 00:03:14.575 SYMLINK libspdk_blobfs.so 00:03:14.575 LIB libspdk_lvol.a 00:03:14.841 SO libspdk_lvol.so.10.0 00:03:14.841 CC lib/nbd/nbd.o 00:03:14.841 CC lib/scsi/dev.o 00:03:14.841 CC lib/ublk/ublk.o 00:03:14.841 CC lib/nbd/nbd_rpc.o 00:03:14.841 CC lib/scsi/lun.o 00:03:14.841 CC lib/ublk/ublk_rpc.o 00:03:14.841 CC lib/nvmf/ctrlr.o 00:03:14.841 CC lib/scsi/port.o 00:03:14.841 CC lib/ftl/ftl_core.o 00:03:14.841 CC lib/nvmf/ctrlr_discovery.o 00:03:14.841 CC lib/scsi/scsi.o 00:03:14.841 CC lib/ftl/ftl_init.o 00:03:14.841 CC lib/nvmf/ctrlr_bdev.o 00:03:14.841 CC lib/scsi/scsi_bdev.o 00:03:14.841 CC lib/ftl/ftl_layout.o 00:03:14.841 CC lib/nvmf/subsystem.o 00:03:14.841 CC lib/scsi/scsi_pr.o 00:03:14.841 CC lib/nvmf/nvmf.o 00:03:14.841 CC lib/ftl/ftl_debug.o 00:03:14.841 CC lib/scsi/scsi_rpc.o 00:03:14.841 CC lib/ftl/ftl_io.o 00:03:14.841 CC lib/nvmf/nvmf_rpc.o 00:03:14.841 CC lib/nvmf/tcp.o 00:03:14.841 CC lib/nvmf/transport.o 00:03:14.841 CC lib/scsi/task.o 00:03:14.841 CC lib/ftl/ftl_sb.o 00:03:14.841 CC lib/nvmf/stubs.o 00:03:14.841 CC lib/nvmf/mdns_server.o 00:03:14.841 CC lib/ftl/ftl_l2p.o 00:03:14.841 CC lib/ftl/ftl_l2p_flat.o 00:03:14.841 CC lib/ftl/ftl_nv_cache.o 00:03:14.841 CC lib/nvmf/vfio_user.o 00:03:14.841 CC lib/nvmf/rdma.o 00:03:14.841 CC lib/nvmf/auth.o 00:03:14.841 CC lib/ftl/ftl_band.o 00:03:14.841 CC lib/ftl/ftl_band_ops.o 00:03:14.841 CC lib/ftl/ftl_rq.o 00:03:14.841 CC lib/ftl/ftl_writer.o 00:03:14.841 CC lib/ftl/ftl_reloc.o 00:03:14.841 CC lib/ftl/ftl_l2p_cache.o 00:03:14.841 CC lib/ftl/ftl_p2l.o 00:03:14.841 CC lib/ftl/ftl_p2l_log.o 00:03:14.841 CC lib/ftl/mngt/ftl_mngt.o 00:03:14.841 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:14.841 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:14.841 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:14.841 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:14.841 SYMLINK libspdk_lvol.so 00:03:14.841 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:15.099 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:15.099 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:15.099 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:15.099 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:15.099 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:15.099 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:15.099 CC lib/ftl/mngt/ftl_mngt_upgrade.o 
00:03:15.099 CC lib/ftl/utils/ftl_conf.o 00:03:15.099 CC lib/ftl/utils/ftl_md.o 00:03:15.099 CC lib/ftl/utils/ftl_mempool.o 00:03:15.099 CC lib/ftl/utils/ftl_bitmap.o 00:03:15.099 CC lib/ftl/utils/ftl_property.o 00:03:15.099 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:15.361 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:15.361 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:15.361 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:15.361 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:15.361 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:15.361 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:15.361 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:15.361 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:15.361 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:15.361 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:15.361 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:15.361 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:15.361 CC lib/ftl/base/ftl_base_dev.o 00:03:15.620 CC lib/ftl/base/ftl_base_bdev.o 00:03:15.620 CC lib/ftl/ftl_trace.o 00:03:15.620 LIB libspdk_nbd.a 00:03:15.620 SO libspdk_nbd.so.7.0 00:03:15.620 LIB libspdk_scsi.a 00:03:15.620 SYMLINK libspdk_nbd.so 00:03:15.877 SO libspdk_scsi.so.9.0 00:03:15.877 SYMLINK libspdk_scsi.so 00:03:15.877 LIB libspdk_ublk.a 00:03:15.877 SO libspdk_ublk.so.3.0 00:03:15.877 CC lib/vhost/vhost.o 00:03:15.877 CC lib/vhost/vhost_rpc.o 00:03:15.877 CC lib/vhost/vhost_scsi.o 00:03:15.877 CC lib/vhost/vhost_blk.o 00:03:15.877 CC lib/vhost/rte_vhost_user.o 00:03:15.877 CC lib/iscsi/conn.o 00:03:15.877 SYMLINK libspdk_ublk.so 00:03:15.877 CC lib/iscsi/init_grp.o 00:03:15.877 CC lib/iscsi/iscsi.o 00:03:15.877 CC lib/iscsi/param.o 00:03:15.877 CC lib/iscsi/portal_grp.o 00:03:15.877 CC lib/iscsi/tgt_node.o 00:03:15.877 CC lib/iscsi/iscsi_subsystem.o 00:03:15.877 CC lib/iscsi/iscsi_rpc.o 00:03:15.877 CC lib/iscsi/task.o 00:03:16.135 LIB libspdk_ftl.a 00:03:16.394 SO libspdk_ftl.so.9.0 00:03:16.652 SYMLINK libspdk_ftl.so 00:03:17.217 LIB libspdk_vhost.a 00:03:17.217 SO libspdk_vhost.so.8.0 00:03:17.217 SYMLINK libspdk_vhost.so 00:03:17.474 LIB libspdk_nvmf.a 00:03:17.474 LIB libspdk_iscsi.a 00:03:17.474 SO libspdk_nvmf.so.19.0 00:03:17.474 SO libspdk_iscsi.so.8.0 00:03:17.731 SYMLINK libspdk_iscsi.so 00:03:17.731 SYMLINK libspdk_nvmf.so 00:03:17.989 CC module/env_dpdk/env_dpdk_rpc.o 00:03:17.989 CC module/vfu_device/vfu_virtio.o 00:03:17.989 CC module/vfu_device/vfu_virtio_blk.o 00:03:17.989 CC module/vfu_device/vfu_virtio_scsi.o 00:03:17.989 CC module/vfu_device/vfu_virtio_rpc.o 00:03:17.989 CC module/vfu_device/vfu_virtio_fs.o 00:03:17.989 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:17.989 CC module/scheduler/gscheduler/gscheduler.o 00:03:17.989 CC module/sock/posix/posix.o 00:03:17.989 CC module/keyring/file/keyring.o 00:03:17.989 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:17.989 CC module/fsdev/aio/fsdev_aio.o 00:03:17.989 CC module/accel/error/accel_error.o 00:03:17.989 CC module/keyring/file/keyring_rpc.o 00:03:17.989 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:17.989 CC module/accel/iaa/accel_iaa.o 00:03:17.989 CC module/accel/error/accel_error_rpc.o 00:03:17.989 CC module/accel/ioat/accel_ioat.o 00:03:17.989 CC module/fsdev/aio/linux_aio_mgr.o 00:03:17.989 CC module/accel/ioat/accel_ioat_rpc.o 00:03:17.989 CC module/accel/iaa/accel_iaa_rpc.o 00:03:17.989 CC module/blob/bdev/blob_bdev.o 00:03:17.989 CC module/accel/dsa/accel_dsa.o 00:03:17.989 CC module/keyring/linux/keyring.o 00:03:17.989 CC module/accel/dsa/accel_dsa_rpc.o 00:03:17.989 CC module/keyring/linux/keyring_rpc.o 00:03:17.989 LIB 
libspdk_env_dpdk_rpc.a 00:03:18.246 SO libspdk_env_dpdk_rpc.so.6.0 00:03:18.246 LIB libspdk_scheduler_gscheduler.a 00:03:18.246 LIB libspdk_keyring_linux.a 00:03:18.246 SYMLINK libspdk_env_dpdk_rpc.so 00:03:18.246 LIB libspdk_keyring_file.a 00:03:18.246 SO libspdk_scheduler_gscheduler.so.4.0 00:03:18.246 SO libspdk_keyring_linux.so.1.0 00:03:18.246 SO libspdk_keyring_file.so.2.0 00:03:18.246 LIB libspdk_accel_ioat.a 00:03:18.246 LIB libspdk_accel_iaa.a 00:03:18.246 SYMLINK libspdk_scheduler_gscheduler.so 00:03:18.246 SYMLINK libspdk_keyring_linux.so 00:03:18.246 SO libspdk_accel_ioat.so.6.0 00:03:18.246 SYMLINK libspdk_keyring_file.so 00:03:18.246 SO libspdk_accel_iaa.so.3.0 00:03:18.246 LIB libspdk_scheduler_dpdk_governor.a 00:03:18.246 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:18.246 LIB libspdk_accel_error.a 00:03:18.246 SYMLINK libspdk_accel_ioat.so 00:03:18.246 LIB libspdk_scheduler_dynamic.a 00:03:18.246 SYMLINK libspdk_accel_iaa.so 00:03:18.246 SO libspdk_accel_error.so.2.0 00:03:18.246 LIB libspdk_blob_bdev.a 00:03:18.246 LIB libspdk_accel_dsa.a 00:03:18.246 SO libspdk_scheduler_dynamic.so.4.0 00:03:18.246 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:18.246 SO libspdk_blob_bdev.so.11.0 00:03:18.506 SO libspdk_accel_dsa.so.5.0 00:03:18.506 SYMLINK libspdk_accel_error.so 00:03:18.506 SYMLINK libspdk_scheduler_dynamic.so 00:03:18.506 SYMLINK libspdk_blob_bdev.so 00:03:18.506 SYMLINK libspdk_accel_dsa.so 00:03:18.506 LIB libspdk_vfu_device.a 00:03:18.765 SO libspdk_vfu_device.so.3.0 00:03:18.765 CC module/bdev/error/vbdev_error.o 00:03:18.765 CC module/bdev/lvol/vbdev_lvol.o 00:03:18.765 CC module/bdev/null/bdev_null.o 00:03:18.765 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:18.765 CC module/bdev/error/vbdev_error_rpc.o 00:03:18.765 CC module/bdev/null/bdev_null_rpc.o 00:03:18.765 CC module/bdev/malloc/bdev_malloc.o 00:03:18.765 CC module/bdev/delay/vbdev_delay.o 00:03:18.765 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:18.765 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:18.765 CC module/bdev/gpt/gpt.o 00:03:18.765 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:18.765 CC module/bdev/gpt/vbdev_gpt.o 00:03:18.765 CC module/blobfs/bdev/blobfs_bdev.o 00:03:18.765 CC module/bdev/nvme/bdev_nvme.o 00:03:18.765 CC module/bdev/split/vbdev_split.o 00:03:18.765 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:18.765 CC module/bdev/passthru/vbdev_passthru.o 00:03:18.765 CC module/bdev/raid/bdev_raid_rpc.o 00:03:18.765 CC module/bdev/raid/bdev_raid.o 00:03:18.765 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:18.765 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:18.765 CC module/bdev/split/vbdev_split_rpc.o 00:03:18.766 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:18.766 CC module/bdev/aio/bdev_aio.o 00:03:18.766 CC module/bdev/nvme/nvme_rpc.o 00:03:18.766 CC module/bdev/raid/bdev_raid_sb.o 00:03:18.766 CC module/bdev/aio/bdev_aio_rpc.o 00:03:18.766 CC module/bdev/nvme/bdev_mdns_client.o 00:03:18.766 CC module/bdev/raid/raid0.o 00:03:18.766 CC module/bdev/ftl/bdev_ftl.o 00:03:18.766 CC module/bdev/nvme/vbdev_opal.o 00:03:18.766 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:18.766 CC module/bdev/raid/raid1.o 00:03:18.766 CC module/bdev/iscsi/bdev_iscsi.o 00:03:18.766 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:18.766 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:18.766 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:18.766 CC module/bdev/raid/concat.o 00:03:18.766 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:18.766 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:18.766 CC 
module/bdev/virtio/bdev_virtio_rpc.o 00:03:18.766 SYMLINK libspdk_vfu_device.so 00:03:18.766 LIB libspdk_fsdev_aio.a 00:03:19.024 SO libspdk_fsdev_aio.so.1.0 00:03:19.024 LIB libspdk_sock_posix.a 00:03:19.024 SO libspdk_sock_posix.so.6.0 00:03:19.024 SYMLINK libspdk_fsdev_aio.so 00:03:19.024 LIB libspdk_blobfs_bdev.a 00:03:19.024 SO libspdk_blobfs_bdev.so.6.0 00:03:19.024 SYMLINK libspdk_sock_posix.so 00:03:19.024 LIB libspdk_bdev_split.a 00:03:19.024 SO libspdk_bdev_split.so.6.0 00:03:19.283 SYMLINK libspdk_blobfs_bdev.so 00:03:19.283 LIB libspdk_bdev_null.a 00:03:19.283 LIB libspdk_bdev_gpt.a 00:03:19.283 LIB libspdk_bdev_zone_block.a 00:03:19.283 SO libspdk_bdev_null.so.6.0 00:03:19.283 LIB libspdk_bdev_error.a 00:03:19.283 SO libspdk_bdev_gpt.so.6.0 00:03:19.283 SYMLINK libspdk_bdev_split.so 00:03:19.283 LIB libspdk_bdev_passthru.a 00:03:19.283 SO libspdk_bdev_zone_block.so.6.0 00:03:19.283 SO libspdk_bdev_error.so.6.0 00:03:19.283 LIB libspdk_bdev_ftl.a 00:03:19.283 SO libspdk_bdev_passthru.so.6.0 00:03:19.283 SYMLINK libspdk_bdev_null.so 00:03:19.283 SYMLINK libspdk_bdev_gpt.so 00:03:19.283 SO libspdk_bdev_ftl.so.6.0 00:03:19.283 SYMLINK libspdk_bdev_zone_block.so 00:03:19.283 SYMLINK libspdk_bdev_error.so 00:03:19.283 LIB libspdk_bdev_aio.a 00:03:19.283 SYMLINK libspdk_bdev_passthru.so 00:03:19.283 SO libspdk_bdev_aio.so.6.0 00:03:19.283 LIB libspdk_bdev_malloc.a 00:03:19.283 SYMLINK libspdk_bdev_ftl.so 00:03:19.283 LIB libspdk_bdev_delay.a 00:03:19.283 SO libspdk_bdev_malloc.so.6.0 00:03:19.283 LIB libspdk_bdev_iscsi.a 00:03:19.283 SO libspdk_bdev_delay.so.6.0 00:03:19.283 SYMLINK libspdk_bdev_aio.so 00:03:19.283 SO libspdk_bdev_iscsi.so.6.0 00:03:19.283 SYMLINK libspdk_bdev_malloc.so 00:03:19.541 SYMLINK libspdk_bdev_delay.so 00:03:19.541 LIB libspdk_bdev_lvol.a 00:03:19.541 SYMLINK libspdk_bdev_iscsi.so 00:03:19.541 SO libspdk_bdev_lvol.so.6.0 00:03:19.541 SYMLINK libspdk_bdev_lvol.so 00:03:19.541 LIB libspdk_bdev_virtio.a 00:03:19.541 SO libspdk_bdev_virtio.so.6.0 00:03:19.541 SYMLINK libspdk_bdev_virtio.so 00:03:20.107 LIB libspdk_bdev_raid.a 00:03:20.107 SO libspdk_bdev_raid.so.6.0 00:03:20.107 SYMLINK libspdk_bdev_raid.so 00:03:21.480 LIB libspdk_bdev_nvme.a 00:03:21.480 SO libspdk_bdev_nvme.so.7.0 00:03:21.480 SYMLINK libspdk_bdev_nvme.so 00:03:21.738 CC module/event/subsystems/vmd/vmd.o 00:03:21.738 CC module/event/subsystems/iobuf/iobuf.o 00:03:21.738 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:21.738 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:21.738 CC module/event/subsystems/fsdev/fsdev.o 00:03:21.738 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:21.738 CC module/event/subsystems/keyring/keyring.o 00:03:21.738 CC module/event/subsystems/scheduler/scheduler.o 00:03:21.738 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:21.738 CC module/event/subsystems/sock/sock.o 00:03:21.997 LIB libspdk_event_vhost_blk.a 00:03:21.997 LIB libspdk_event_keyring.a 00:03:21.997 LIB libspdk_event_scheduler.a 00:03:21.997 LIB libspdk_event_fsdev.a 00:03:21.997 LIB libspdk_event_vfu_tgt.a 00:03:21.997 LIB libspdk_event_vmd.a 00:03:21.997 LIB libspdk_event_sock.a 00:03:21.997 LIB libspdk_event_iobuf.a 00:03:21.997 SO libspdk_event_vhost_blk.so.3.0 00:03:21.997 SO libspdk_event_keyring.so.1.0 00:03:21.997 SO libspdk_event_scheduler.so.4.0 00:03:21.997 SO libspdk_event_fsdev.so.1.0 00:03:21.997 SO libspdk_event_vfu_tgt.so.3.0 00:03:21.997 SO libspdk_event_sock.so.5.0 00:03:21.997 SO libspdk_event_vmd.so.6.0 00:03:21.997 SO libspdk_event_iobuf.so.3.0 00:03:21.997 SYMLINK 
libspdk_event_keyring.so 00:03:21.997 SYMLINK libspdk_event_vhost_blk.so 00:03:21.997 SYMLINK libspdk_event_scheduler.so 00:03:21.997 SYMLINK libspdk_event_fsdev.so 00:03:21.997 SYMLINK libspdk_event_vfu_tgt.so 00:03:21.997 SYMLINK libspdk_event_sock.so 00:03:21.997 SYMLINK libspdk_event_vmd.so 00:03:21.997 SYMLINK libspdk_event_iobuf.so 00:03:22.256 CC module/event/subsystems/accel/accel.o 00:03:22.256 LIB libspdk_event_accel.a 00:03:22.256 SO libspdk_event_accel.so.6.0 00:03:22.516 SYMLINK libspdk_event_accel.so 00:03:22.516 CC module/event/subsystems/bdev/bdev.o 00:03:22.774 LIB libspdk_event_bdev.a 00:03:22.774 SO libspdk_event_bdev.so.6.0 00:03:22.774 SYMLINK libspdk_event_bdev.so 00:03:23.032 CC module/event/subsystems/scsi/scsi.o 00:03:23.032 CC module/event/subsystems/nbd/nbd.o 00:03:23.032 CC module/event/subsystems/ublk/ublk.o 00:03:23.032 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:23.032 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:23.032 LIB libspdk_event_nbd.a 00:03:23.032 LIB libspdk_event_ublk.a 00:03:23.033 LIB libspdk_event_scsi.a 00:03:23.033 SO libspdk_event_nbd.so.6.0 00:03:23.033 SO libspdk_event_ublk.so.3.0 00:03:23.291 SO libspdk_event_scsi.so.6.0 00:03:23.291 SYMLINK libspdk_event_nbd.so 00:03:23.291 SYMLINK libspdk_event_ublk.so 00:03:23.291 SYMLINK libspdk_event_scsi.so 00:03:23.291 LIB libspdk_event_nvmf.a 00:03:23.291 SO libspdk_event_nvmf.so.6.0 00:03:23.291 SYMLINK libspdk_event_nvmf.so 00:03:23.291 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:23.291 CC module/event/subsystems/iscsi/iscsi.o 00:03:23.550 LIB libspdk_event_vhost_scsi.a 00:03:23.550 SO libspdk_event_vhost_scsi.so.3.0 00:03:23.550 LIB libspdk_event_iscsi.a 00:03:23.550 SO libspdk_event_iscsi.so.6.0 00:03:23.550 SYMLINK libspdk_event_vhost_scsi.so 00:03:23.550 SYMLINK libspdk_event_iscsi.so 00:03:23.809 SO libspdk.so.6.0 00:03:23.809 SYMLINK libspdk.so 00:03:23.809 CC app/trace_record/trace_record.o 00:03:23.809 CXX app/trace/trace.o 00:03:23.809 CC app/spdk_top/spdk_top.o 00:03:23.809 CC app/spdk_nvme_perf/perf.o 00:03:23.809 CC app/spdk_nvme_discover/discovery_aer.o 00:03:23.809 CC test/rpc_client/rpc_client_test.o 00:03:23.809 CC app/spdk_nvme_identify/identify.o 00:03:23.809 CC app/spdk_lspci/spdk_lspci.o 00:03:23.809 TEST_HEADER include/spdk/accel.h 00:03:23.809 TEST_HEADER include/spdk/accel_module.h 00:03:23.809 TEST_HEADER include/spdk/assert.h 00:03:23.809 TEST_HEADER include/spdk/barrier.h 00:03:23.809 TEST_HEADER include/spdk/base64.h 00:03:23.809 TEST_HEADER include/spdk/bdev.h 00:03:23.809 TEST_HEADER include/spdk/bdev_module.h 00:03:23.809 TEST_HEADER include/spdk/bit_array.h 00:03:23.809 TEST_HEADER include/spdk/bdev_zone.h 00:03:23.809 TEST_HEADER include/spdk/bit_pool.h 00:03:23.809 TEST_HEADER include/spdk/blob_bdev.h 00:03:23.809 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:23.809 TEST_HEADER include/spdk/blobfs.h 00:03:23.809 TEST_HEADER include/spdk/conf.h 00:03:23.809 TEST_HEADER include/spdk/blob.h 00:03:23.809 TEST_HEADER include/spdk/config.h 00:03:23.809 TEST_HEADER include/spdk/cpuset.h 00:03:23.809 TEST_HEADER include/spdk/crc16.h 00:03:23.809 TEST_HEADER include/spdk/crc32.h 00:03:23.809 TEST_HEADER include/spdk/crc64.h 00:03:23.809 TEST_HEADER include/spdk/dif.h 00:03:23.809 TEST_HEADER include/spdk/dma.h 00:03:23.809 TEST_HEADER include/spdk/endian.h 00:03:23.809 TEST_HEADER include/spdk/env_dpdk.h 00:03:23.809 TEST_HEADER include/spdk/env.h 00:03:23.809 TEST_HEADER include/spdk/event.h 00:03:23.809 TEST_HEADER include/spdk/fd_group.h 
00:03:23.809 TEST_HEADER include/spdk/fd.h 00:03:23.810 TEST_HEADER include/spdk/file.h 00:03:23.810 TEST_HEADER include/spdk/fsdev.h 00:03:23.810 TEST_HEADER include/spdk/fsdev_module.h 00:03:23.810 TEST_HEADER include/spdk/ftl.h 00:03:23.810 TEST_HEADER include/spdk/gpt_spec.h 00:03:23.810 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:23.810 TEST_HEADER include/spdk/hexlify.h 00:03:24.075 TEST_HEADER include/spdk/histogram_data.h 00:03:24.075 TEST_HEADER include/spdk/idxd_spec.h 00:03:24.075 TEST_HEADER include/spdk/idxd.h 00:03:24.075 TEST_HEADER include/spdk/init.h 00:03:24.075 TEST_HEADER include/spdk/ioat_spec.h 00:03:24.075 TEST_HEADER include/spdk/ioat.h 00:03:24.075 TEST_HEADER include/spdk/iscsi_spec.h 00:03:24.075 TEST_HEADER include/spdk/json.h 00:03:24.075 TEST_HEADER include/spdk/jsonrpc.h 00:03:24.075 TEST_HEADER include/spdk/keyring.h 00:03:24.075 TEST_HEADER include/spdk/keyring_module.h 00:03:24.075 TEST_HEADER include/spdk/likely.h 00:03:24.075 TEST_HEADER include/spdk/log.h 00:03:24.075 TEST_HEADER include/spdk/lvol.h 00:03:24.075 TEST_HEADER include/spdk/md5.h 00:03:24.075 TEST_HEADER include/spdk/memory.h 00:03:24.075 TEST_HEADER include/spdk/mmio.h 00:03:24.075 TEST_HEADER include/spdk/nbd.h 00:03:24.075 TEST_HEADER include/spdk/net.h 00:03:24.075 TEST_HEADER include/spdk/notify.h 00:03:24.075 TEST_HEADER include/spdk/nvme_intel.h 00:03:24.075 TEST_HEADER include/spdk/nvme.h 00:03:24.075 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:24.075 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:24.075 TEST_HEADER include/spdk/nvme_spec.h 00:03:24.075 TEST_HEADER include/spdk/nvme_zns.h 00:03:24.075 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:24.075 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:24.075 TEST_HEADER include/spdk/nvmf.h 00:03:24.075 TEST_HEADER include/spdk/nvmf_transport.h 00:03:24.075 TEST_HEADER include/spdk/nvmf_spec.h 00:03:24.075 TEST_HEADER include/spdk/opal.h 00:03:24.075 TEST_HEADER include/spdk/opal_spec.h 00:03:24.075 TEST_HEADER include/spdk/pci_ids.h 00:03:24.075 TEST_HEADER include/spdk/pipe.h 00:03:24.075 TEST_HEADER include/spdk/reduce.h 00:03:24.075 TEST_HEADER include/spdk/queue.h 00:03:24.075 TEST_HEADER include/spdk/rpc.h 00:03:24.075 TEST_HEADER include/spdk/scsi.h 00:03:24.075 TEST_HEADER include/spdk/scheduler.h 00:03:24.075 TEST_HEADER include/spdk/scsi_spec.h 00:03:24.075 TEST_HEADER include/spdk/sock.h 00:03:24.075 TEST_HEADER include/spdk/string.h 00:03:24.075 TEST_HEADER include/spdk/stdinc.h 00:03:24.075 TEST_HEADER include/spdk/trace.h 00:03:24.075 TEST_HEADER include/spdk/thread.h 00:03:24.075 TEST_HEADER include/spdk/trace_parser.h 00:03:24.075 TEST_HEADER include/spdk/tree.h 00:03:24.075 TEST_HEADER include/spdk/ublk.h 00:03:24.075 TEST_HEADER include/spdk/util.h 00:03:24.075 TEST_HEADER include/spdk/uuid.h 00:03:24.075 TEST_HEADER include/spdk/version.h 00:03:24.075 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:24.075 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:24.075 TEST_HEADER include/spdk/vmd.h 00:03:24.075 TEST_HEADER include/spdk/vhost.h 00:03:24.075 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:24.075 TEST_HEADER include/spdk/zipf.h 00:03:24.075 TEST_HEADER include/spdk/xor.h 00:03:24.075 CXX test/cpp_headers/accel.o 00:03:24.075 CXX test/cpp_headers/accel_module.o 00:03:24.075 CXX test/cpp_headers/assert.o 00:03:24.075 CXX test/cpp_headers/barrier.o 00:03:24.075 CXX test/cpp_headers/base64.o 00:03:24.075 CXX test/cpp_headers/bdev.o 00:03:24.075 CXX test/cpp_headers/bdev_module.o 00:03:24.075 CXX 
test/cpp_headers/bdev_zone.o 00:03:24.075 CC app/spdk_dd/spdk_dd.o 00:03:24.075 CXX test/cpp_headers/bit_array.o 00:03:24.075 CXX test/cpp_headers/bit_pool.o 00:03:24.075 CXX test/cpp_headers/blob_bdev.o 00:03:24.075 CXX test/cpp_headers/blobfs_bdev.o 00:03:24.075 CXX test/cpp_headers/blobfs.o 00:03:24.075 CXX test/cpp_headers/blob.o 00:03:24.075 CXX test/cpp_headers/conf.o 00:03:24.075 CXX test/cpp_headers/config.o 00:03:24.075 CXX test/cpp_headers/cpuset.o 00:03:24.075 CXX test/cpp_headers/crc16.o 00:03:24.075 CC app/nvmf_tgt/nvmf_main.o 00:03:24.075 CC app/iscsi_tgt/iscsi_tgt.o 00:03:24.075 CXX test/cpp_headers/crc32.o 00:03:24.075 CC examples/ioat/verify/verify.o 00:03:24.075 CC examples/ioat/perf/perf.o 00:03:24.075 CC app/spdk_tgt/spdk_tgt.o 00:03:24.075 CC examples/util/zipf/zipf.o 00:03:24.075 CC test/thread/poller_perf/poller_perf.o 00:03:24.075 CC test/app/stub/stub.o 00:03:24.075 CC test/app/histogram_perf/histogram_perf.o 00:03:24.075 CC test/app/jsoncat/jsoncat.o 00:03:24.075 CC test/env/pci/pci_ut.o 00:03:24.075 CC test/env/vtophys/vtophys.o 00:03:24.075 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:24.075 CC test/env/memory/memory_ut.o 00:03:24.075 CC app/fio/nvme/fio_plugin.o 00:03:24.075 CC test/dma/test_dma/test_dma.o 00:03:24.075 CC test/app/bdev_svc/bdev_svc.o 00:03:24.075 CC app/fio/bdev/fio_plugin.o 00:03:24.335 LINK spdk_lspci 00:03:24.335 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:24.335 CC test/env/mem_callbacks/mem_callbacks.o 00:03:24.335 LINK rpc_client_test 00:03:24.335 LINK spdk_nvme_discover 00:03:24.335 LINK poller_perf 00:03:24.335 LINK jsoncat 00:03:24.335 LINK histogram_perf 00:03:24.335 LINK vtophys 00:03:24.335 LINK zipf 00:03:24.335 CXX test/cpp_headers/dif.o 00:03:24.335 CXX test/cpp_headers/crc64.o 00:03:24.335 LINK interrupt_tgt 00:03:24.335 CXX test/cpp_headers/dma.o 00:03:24.335 CXX test/cpp_headers/endian.o 00:03:24.335 CXX test/cpp_headers/env_dpdk.o 00:03:24.335 CXX test/cpp_headers/env.o 00:03:24.335 LINK nvmf_tgt 00:03:24.335 LINK spdk_trace_record 00:03:24.335 CXX test/cpp_headers/event.o 00:03:24.335 CXX test/cpp_headers/fd_group.o 00:03:24.335 LINK stub 00:03:24.597 LINK env_dpdk_post_init 00:03:24.597 CXX test/cpp_headers/fd.o 00:03:24.597 CXX test/cpp_headers/file.o 00:03:24.597 CXX test/cpp_headers/fsdev.o 00:03:24.597 LINK iscsi_tgt 00:03:24.597 LINK verify 00:03:24.597 CXX test/cpp_headers/fsdev_module.o 00:03:24.597 LINK spdk_tgt 00:03:24.597 CXX test/cpp_headers/ftl.o 00:03:24.597 CXX test/cpp_headers/fuse_dispatcher.o 00:03:24.597 CXX test/cpp_headers/gpt_spec.o 00:03:24.597 LINK ioat_perf 00:03:24.597 CXX test/cpp_headers/hexlify.o 00:03:24.597 CXX test/cpp_headers/histogram_data.o 00:03:24.597 LINK bdev_svc 00:03:24.597 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:24.597 CXX test/cpp_headers/idxd.o 00:03:24.597 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:24.597 CXX test/cpp_headers/idxd_spec.o 00:03:24.597 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:24.597 CXX test/cpp_headers/init.o 00:03:24.860 CXX test/cpp_headers/ioat.o 00:03:24.860 CXX test/cpp_headers/ioat_spec.o 00:03:24.860 CXX test/cpp_headers/iscsi_spec.o 00:03:24.860 LINK spdk_dd 00:03:24.860 CXX test/cpp_headers/json.o 00:03:24.860 LINK spdk_trace 00:03:24.860 CXX test/cpp_headers/jsonrpc.o 00:03:24.860 CXX test/cpp_headers/keyring.o 00:03:24.860 CXX test/cpp_headers/keyring_module.o 00:03:24.860 CXX test/cpp_headers/likely.o 00:03:24.860 CXX test/cpp_headers/log.o 00:03:24.860 CXX test/cpp_headers/lvol.o 00:03:24.860 CXX 
test/cpp_headers/md5.o 00:03:24.860 CXX test/cpp_headers/memory.o 00:03:24.860 CXX test/cpp_headers/mmio.o 00:03:24.860 CXX test/cpp_headers/nbd.o 00:03:24.860 CXX test/cpp_headers/net.o 00:03:24.860 LINK pci_ut 00:03:24.860 CXX test/cpp_headers/notify.o 00:03:24.860 CXX test/cpp_headers/nvme.o 00:03:24.860 CXX test/cpp_headers/nvme_intel.o 00:03:24.860 CXX test/cpp_headers/nvme_ocssd.o 00:03:24.860 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:25.123 CXX test/cpp_headers/nvme_spec.o 00:03:25.123 CXX test/cpp_headers/nvme_zns.o 00:03:25.123 CXX test/cpp_headers/nvmf_cmd.o 00:03:25.123 CC test/event/event_perf/event_perf.o 00:03:25.123 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:25.123 CC test/event/reactor/reactor.o 00:03:25.123 LINK nvme_fuzz 00:03:25.123 CC test/event/reactor_perf/reactor_perf.o 00:03:25.123 CC examples/sock/hello_world/hello_sock.o 00:03:25.123 CC examples/vmd/lsvmd/lsvmd.o 00:03:25.123 CXX test/cpp_headers/nvmf.o 00:03:25.123 CXX test/cpp_headers/nvmf_spec.o 00:03:25.123 CC examples/vmd/led/led.o 00:03:25.123 LINK test_dma 00:03:25.123 CXX test/cpp_headers/nvmf_transport.o 00:03:25.123 CXX test/cpp_headers/opal.o 00:03:25.123 LINK spdk_nvme 00:03:25.123 CC test/event/app_repeat/app_repeat.o 00:03:25.123 CXX test/cpp_headers/opal_spec.o 00:03:25.123 CXX test/cpp_headers/pci_ids.o 00:03:25.123 CC examples/idxd/perf/perf.o 00:03:25.123 CC examples/thread/thread/thread_ex.o 00:03:25.123 CXX test/cpp_headers/pipe.o 00:03:25.389 LINK spdk_bdev 00:03:25.389 CXX test/cpp_headers/queue.o 00:03:25.389 CC test/event/scheduler/scheduler.o 00:03:25.389 CXX test/cpp_headers/reduce.o 00:03:25.389 CXX test/cpp_headers/rpc.o 00:03:25.389 CXX test/cpp_headers/scheduler.o 00:03:25.389 CXX test/cpp_headers/scsi.o 00:03:25.389 CXX test/cpp_headers/scsi_spec.o 00:03:25.389 CXX test/cpp_headers/sock.o 00:03:25.389 CXX test/cpp_headers/stdinc.o 00:03:25.389 CXX test/cpp_headers/string.o 00:03:25.389 CXX test/cpp_headers/thread.o 00:03:25.389 CXX test/cpp_headers/trace.o 00:03:25.389 CXX test/cpp_headers/trace_parser.o 00:03:25.389 CXX test/cpp_headers/tree.o 00:03:25.389 LINK event_perf 00:03:25.389 CXX test/cpp_headers/ublk.o 00:03:25.389 CXX test/cpp_headers/util.o 00:03:25.389 LINK reactor 00:03:25.389 CXX test/cpp_headers/uuid.o 00:03:25.389 CC app/vhost/vhost.o 00:03:25.389 LINK lsvmd 00:03:25.389 CXX test/cpp_headers/version.o 00:03:25.389 CXX test/cpp_headers/vfio_user_pci.o 00:03:25.389 LINK reactor_perf 00:03:25.389 CXX test/cpp_headers/vfio_user_spec.o 00:03:25.389 CXX test/cpp_headers/vhost.o 00:03:25.389 LINK mem_callbacks 00:03:25.389 CXX test/cpp_headers/vmd.o 00:03:25.389 CXX test/cpp_headers/xor.o 00:03:25.389 LINK led 00:03:25.389 CXX test/cpp_headers/zipf.o 00:03:25.389 LINK app_repeat 00:03:25.648 LINK vhost_fuzz 00:03:25.648 LINK spdk_nvme_perf 00:03:25.648 LINK spdk_top 00:03:25.648 LINK hello_sock 00:03:25.648 LINK spdk_nvme_identify 00:03:25.648 LINK thread 00:03:25.648 LINK scheduler 00:03:25.907 LINK vhost 00:03:25.907 LINK idxd_perf 00:03:25.907 CC test/nvme/e2edp/nvme_dp.o 00:03:25.907 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:25.907 CC test/nvme/fdp/fdp.o 00:03:25.907 CC test/nvme/reset/reset.o 00:03:25.907 CC test/nvme/aer/aer.o 00:03:25.907 CC test/nvme/cuse/cuse.o 00:03:25.907 CC test/nvme/reserve/reserve.o 00:03:25.907 CC test/nvme/compliance/nvme_compliance.o 00:03:25.907 CC test/nvme/sgl/sgl.o 00:03:25.907 CC test/nvme/fused_ordering/fused_ordering.o 00:03:25.907 CC test/nvme/err_injection/err_injection.o 00:03:25.907 CC 
test/nvme/simple_copy/simple_copy.o 00:03:25.907 CC test/nvme/startup/startup.o 00:03:25.907 CC test/nvme/connect_stress/connect_stress.o 00:03:25.907 CC test/nvme/overhead/overhead.o 00:03:25.907 CC test/nvme/boot_partition/boot_partition.o 00:03:25.907 CC test/blobfs/mkfs/mkfs.o 00:03:25.907 CC test/accel/dif/dif.o 00:03:25.907 CC test/lvol/esnap/esnap.o 00:03:26.165 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:26.165 CC examples/nvme/hello_world/hello_world.o 00:03:26.165 CC examples/nvme/reconnect/reconnect.o 00:03:26.165 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:26.165 CC examples/nvme/arbitration/arbitration.o 00:03:26.165 CC examples/nvme/hotplug/hotplug.o 00:03:26.165 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:26.165 CC examples/nvme/abort/abort.o 00:03:26.165 LINK boot_partition 00:03:26.165 LINK startup 00:03:26.165 LINK connect_stress 00:03:26.165 LINK doorbell_aers 00:03:26.165 LINK mkfs 00:03:26.165 LINK simple_copy 00:03:26.165 LINK reserve 00:03:26.165 LINK err_injection 00:03:26.165 LINK aer 00:03:26.165 CC examples/accel/perf/accel_perf.o 00:03:26.165 LINK memory_ut 00:03:26.165 LINK fused_ordering 00:03:26.165 LINK overhead 00:03:26.165 LINK sgl 00:03:26.165 LINK nvme_compliance 00:03:26.165 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:26.423 CC examples/blob/cli/blobcli.o 00:03:26.423 CC examples/blob/hello_world/hello_blob.o 00:03:26.423 LINK cmb_copy 00:03:26.423 LINK reset 00:03:26.423 LINK nvme_dp 00:03:26.423 LINK hello_world 00:03:26.423 LINK pmr_persistence 00:03:26.423 LINK hotplug 00:03:26.423 LINK fdp 00:03:26.423 LINK arbitration 00:03:26.423 LINK abort 00:03:26.681 LINK reconnect 00:03:26.681 LINK hello_blob 00:03:26.681 LINK hello_fsdev 00:03:26.681 LINK nvme_manage 00:03:26.681 LINK accel_perf 00:03:26.939 LINK dif 00:03:26.939 LINK blobcli 00:03:27.197 CC examples/bdev/hello_world/hello_bdev.o 00:03:27.197 CC examples/bdev/bdevperf/bdevperf.o 00:03:27.197 LINK iscsi_fuzz 00:03:27.197 CC test/bdev/bdevio/bdevio.o 00:03:27.456 LINK cuse 00:03:27.456 LINK hello_bdev 00:03:27.714 LINK bdevio 00:03:27.972 LINK bdevperf 00:03:28.229 CC examples/nvmf/nvmf/nvmf.o 00:03:28.793 LINK nvmf 00:03:31.328 LINK esnap 00:03:31.328 00:03:31.328 real 1m9.209s 00:03:31.328 user 11m51.122s 00:03:31.328 sys 2m38.157s 00:03:31.328 18:10:49 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:31.328 18:10:49 make -- common/autotest_common.sh@10 -- $ set +x 00:03:31.328 ************************************ 00:03:31.328 END TEST make 00:03:31.328 ************************************ 00:03:31.328 18:10:49 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:31.328 18:10:49 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:31.328 18:10:49 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:31.328 18:10:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.328 18:10:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:31.328 18:10:49 -- pm/common@44 -- $ pid=1049606 00:03:31.328 18:10:49 -- pm/common@50 -- $ kill -TERM 1049606 00:03:31.328 18:10:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.328 18:10:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:31.328 18:10:49 -- pm/common@44 -- $ pid=1049608 00:03:31.328 18:10:49 -- pm/common@50 -- $ kill -TERM 1049608 00:03:31.328 18:10:49 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:31.328 18:10:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:31.328 18:10:49 -- pm/common@44 -- $ pid=1049610 00:03:31.328 18:10:49 -- pm/common@50 -- $ kill -TERM 1049610 00:03:31.328 18:10:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.328 18:10:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:31.328 18:10:49 -- pm/common@44 -- $ pid=1049641 00:03:31.328 18:10:49 -- pm/common@50 -- $ sudo -E kill -TERM 1049641 00:03:31.586 18:10:49 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:31.586 18:10:49 -- common/autotest_common.sh@1681 -- # lcov --version 00:03:31.586 18:10:49 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:31.586 18:10:49 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:31.586 18:10:49 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:31.586 18:10:49 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:31.586 18:10:49 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:31.586 18:10:49 -- scripts/common.sh@336 -- # IFS=.-: 00:03:31.586 18:10:49 -- scripts/common.sh@336 -- # read -ra ver1 00:03:31.586 18:10:49 -- scripts/common.sh@337 -- # IFS=.-: 00:03:31.586 18:10:49 -- scripts/common.sh@337 -- # read -ra ver2 00:03:31.586 18:10:49 -- scripts/common.sh@338 -- # local 'op=<' 00:03:31.586 18:10:49 -- scripts/common.sh@340 -- # ver1_l=2 00:03:31.586 18:10:49 -- scripts/common.sh@341 -- # ver2_l=1 00:03:31.586 18:10:49 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:31.586 18:10:49 -- scripts/common.sh@344 -- # case "$op" in 00:03:31.586 18:10:49 -- scripts/common.sh@345 -- # : 1 00:03:31.586 18:10:49 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:31.586 18:10:49 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:31.586 18:10:49 -- scripts/common.sh@365 -- # decimal 1 00:03:31.586 18:10:49 -- scripts/common.sh@353 -- # local d=1 00:03:31.586 18:10:49 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:31.586 18:10:49 -- scripts/common.sh@355 -- # echo 1 00:03:31.586 18:10:49 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:31.586 18:10:49 -- scripts/common.sh@366 -- # decimal 2 00:03:31.586 18:10:49 -- scripts/common.sh@353 -- # local d=2 00:03:31.586 18:10:49 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:31.586 18:10:49 -- scripts/common.sh@355 -- # echo 2 00:03:31.586 18:10:49 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:31.586 18:10:49 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:31.586 18:10:49 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:31.586 18:10:49 -- scripts/common.sh@368 -- # return 0 00:03:31.586 18:10:49 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:31.586 18:10:49 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:31.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.586 --rc genhtml_branch_coverage=1 00:03:31.587 --rc genhtml_function_coverage=1 00:03:31.587 --rc genhtml_legend=1 00:03:31.587 --rc geninfo_all_blocks=1 00:03:31.587 --rc geninfo_unexecuted_blocks=1 00:03:31.587 00:03:31.587 ' 00:03:31.587 18:10:49 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:31.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.587 --rc genhtml_branch_coverage=1 00:03:31.587 --rc genhtml_function_coverage=1 00:03:31.587 --rc genhtml_legend=1 00:03:31.587 --rc geninfo_all_blocks=1 00:03:31.587 --rc geninfo_unexecuted_blocks=1 00:03:31.587 00:03:31.587 ' 00:03:31.587 18:10:49 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:31.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.587 --rc genhtml_branch_coverage=1 00:03:31.587 --rc genhtml_function_coverage=1 00:03:31.587 --rc genhtml_legend=1 00:03:31.587 --rc geninfo_all_blocks=1 00:03:31.587 --rc geninfo_unexecuted_blocks=1 00:03:31.587 00:03:31.587 ' 00:03:31.587 18:10:49 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:31.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.587 --rc genhtml_branch_coverage=1 00:03:31.587 --rc genhtml_function_coverage=1 00:03:31.587 --rc genhtml_legend=1 00:03:31.587 --rc geninfo_all_blocks=1 00:03:31.587 --rc geninfo_unexecuted_blocks=1 00:03:31.587 00:03:31.587 ' 00:03:31.587 18:10:49 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:31.587 18:10:49 -- nvmf/common.sh@7 -- # uname -s 00:03:31.587 18:10:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:31.587 18:10:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:31.587 18:10:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:31.587 18:10:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:31.587 18:10:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:31.587 18:10:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:31.587 18:10:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:31.587 18:10:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:31.587 18:10:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:31.587 18:10:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:31.587 18:10:49 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:03:31.587 18:10:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:03:31.587 18:10:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:31.587 18:10:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:31.587 18:10:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:31.587 18:10:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:31.587 18:10:49 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:31.587 18:10:49 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:31.587 18:10:49 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:31.587 18:10:49 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:31.587 18:10:49 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:31.587 18:10:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.587 18:10:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.587 18:10:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.587 18:10:49 -- paths/export.sh@5 -- # export PATH 00:03:31.587 18:10:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.587 18:10:49 -- nvmf/common.sh@51 -- # : 0 00:03:31.587 18:10:49 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:31.587 18:10:49 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:31.587 18:10:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:31.587 18:10:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:31.587 18:10:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:31.587 18:10:49 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:31.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:31.587 18:10:49 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:31.587 18:10:49 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:31.587 18:10:49 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:31.587 18:10:49 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:31.587 18:10:49 -- spdk/autotest.sh@32 -- # uname -s 00:03:31.587 18:10:49 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:31.587 18:10:49 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:31.587 18:10:49 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
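The "[: : integer expression expected" complaint above comes from test/nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']', i.e. an arithmetic test against a variable that is unset in this configuration. A minimal sketch of the usual guard for that pattern follows; SPDK_TEST_SOME_FLAG and the appended argument are placeholders for illustration, not the names common.sh actually uses:

  # Default the flag to 0 so the numeric test never sees an empty string.
  if [ "${SPDK_TEST_SOME_FLAG:-0}" -eq 1 ]; then
      NVMF_APP+=(--some-extra-arg)   # illustrative action only
  fi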
00:03:31.587 18:10:49 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:31.587 18:10:49 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:31.587 18:10:49 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:31.587 18:10:49 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:31.587 18:10:49 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:31.587 18:10:49 -- spdk/autotest.sh@48 -- # udevadm_pid=1108927 00:03:31.587 18:10:49 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:31.587 18:10:49 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:31.587 18:10:49 -- pm/common@17 -- # local monitor 00:03:31.587 18:10:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.587 18:10:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.587 18:10:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.587 18:10:49 -- pm/common@21 -- # date +%s 00:03:31.587 18:10:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.587 18:10:49 -- pm/common@21 -- # date +%s 00:03:31.587 18:10:49 -- pm/common@25 -- # sleep 1 00:03:31.587 18:10:49 -- pm/common@21 -- # date +%s 00:03:31.587 18:10:49 -- pm/common@21 -- # date +%s 00:03:31.587 18:10:49 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728403849 00:03:31.587 18:10:49 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728403849 00:03:31.587 18:10:49 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728403849 00:03:31.587 18:10:49 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728403849 00:03:31.587 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728403849_collect-cpu-load.pm.log 00:03:31.587 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728403849_collect-vmstat.pm.log 00:03:31.587 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728403849_collect-cpu-temp.pm.log 00:03:31.587 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728403849_collect-bmc-pm.bmc.pm.log 00:03:32.524 18:10:50 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:32.524 18:10:50 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:32.524 18:10:50 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:32.524 18:10:50 -- common/autotest_common.sh@10 -- # set +x 00:03:32.524 18:10:50 -- spdk/autotest.sh@59 -- # create_test_list 00:03:32.524 18:10:50 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:32.525 18:10:50 -- common/autotest_common.sh@10 -- # set +x 00:03:32.525 18:10:50 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:32.525 18:10:50 
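The collect-cpu-load, collect-vmstat, collect-cpu-temp and collect-bmc-pm launches above, together with the kill -TERM calls against the *.pid files at the end of the make step, follow a simple pid-file convention. A stripped-down sketch of that pattern under assumed paths (the real helpers in scripts/perf/pm and pm/common do more bookkeeping and use per-collector pid file names):

  power_dir=/tmp/power                     # stand-in for $output_dir/power
  mkdir -p "$power_dir"

  start_monitor() {                        # run a collector in the background, record its pid
      local name=$1; shift
      "$@" -d "$power_dir" -l -p "monitor.autotest.sh.$(date +%s)" &
      echo $! > "$power_dir/$name.pid"
  }

  stop_monitors() {                        # mirror signal_monitor_resources: TERM each recorded pid
      local pidfile
      for pidfile in "$power_dir"/*.pid; do
          [ -e "$pidfile" ] || continue
          kill -TERM "$(cat "$pidfile")" 2>/dev/null || true
      done
  }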
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:32.525 18:10:50 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:32.525 18:10:50 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:32.525 18:10:50 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:32.525 18:10:50 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:32.525 18:10:50 -- common/autotest_common.sh@1455 -- # uname 00:03:32.525 18:10:50 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:32.525 18:10:50 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:32.525 18:10:50 -- common/autotest_common.sh@1475 -- # uname 00:03:32.783 18:10:50 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:32.783 18:10:50 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:32.783 18:10:50 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:32.783 lcov: LCOV version 1.15 00:03:32.783 18:10:50 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:54.763 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:54.763 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:12.916 18:11:28 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:12.916 18:11:28 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:12.916 18:11:28 -- common/autotest_common.sh@10 -- # set +x 00:04:12.916 18:11:28 -- spdk/autotest.sh@78 -- # rm -f 00:04:12.916 18:11:28 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:12.916 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:12.916 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:12.916 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:12.916 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:12.916 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:12.916 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:12.916 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:12.916 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:12.916 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:04:12.916 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:12.916 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:12.916 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:12.917 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:12.917 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:12.917 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:12.917 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:12.917 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:12.917 18:11:30 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:04:12.917 18:11:30 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:12.917 18:11:30 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:12.917 18:11:30 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:12.917 18:11:30 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:12.917 18:11:30 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:12.917 18:11:30 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:12.917 18:11:30 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:12.917 18:11:30 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:12.917 18:11:30 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:12.917 18:11:30 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.917 18:11:30 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.917 18:11:30 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:12.917 18:11:30 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:12.917 18:11:30 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:12.917 No valid GPT data, bailing 00:04:12.917 18:11:30 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:12.917 18:11:30 -- scripts/common.sh@394 -- # pt= 00:04:12.917 18:11:30 -- scripts/common.sh@395 -- # return 1 00:04:12.917 18:11:30 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:12.917 1+0 records in 00:04:12.917 1+0 records out 00:04:12.917 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00235886 s, 445 MB/s 00:04:12.917 18:11:30 -- spdk/autotest.sh@105 -- # sync 00:04:12.917 18:11:30 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:12.917 18:11:30 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:12.917 18:11:30 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:14.292 18:11:32 -- spdk/autotest.sh@111 -- # uname -s 00:04:14.292 18:11:32 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:14.292 18:11:32 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:14.292 18:11:32 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:15.228 Hugepages 00:04:15.228 node hugesize free / total 00:04:15.228 node0 1048576kB 0 / 0 00:04:15.228 node0 2048kB 0 / 0 00:04:15.228 node1 1048576kB 0 / 0 00:04:15.228 node1 2048kB 0 / 0 00:04:15.228 00:04:15.228 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:15.228 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:15.228 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:15.228 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:15.228 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:15.228 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:15.228 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:15.228 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:15.228 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:15.228 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:15.228 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:15.228 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:15.228 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:15.228 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:15.228 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:15.228 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:15.487 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:15.487 I/OAT 0000:80:04.7 8086 
0e27 1 ioatdma - - 00:04:15.487 18:11:33 -- spdk/autotest.sh@117 -- # uname -s 00:04:15.487 18:11:33 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:15.487 18:11:33 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:15.487 18:11:33 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:16.421 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:16.682 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:16.682 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:16.682 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:16.682 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:16.682 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:16.682 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:16.682 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:16.682 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:16.682 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:16.682 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:16.682 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:16.682 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:16.682 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:16.682 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:16.682 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:17.620 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:04:17.620 18:11:35 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:19.004 18:11:36 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:19.004 18:11:36 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:19.004 18:11:36 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:19.004 18:11:36 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:19.004 18:11:36 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:19.004 18:11:36 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:19.004 18:11:36 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:19.004 18:11:36 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:19.004 18:11:36 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:19.004 18:11:36 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:19.004 18:11:36 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:0b:00.0 00:04:19.004 18:11:36 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:19.939 Waiting for block devices as requested 00:04:19.939 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:20.200 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:20.200 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:20.200 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:20.200 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:20.460 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:20.460 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:20.460 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:20.719 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:04:20.719 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:20.719 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:20.979 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:20.979 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:20.979 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:21.237 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:21.237 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:21.237 0000:80:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:04:21.495 18:11:39 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:21.495 18:11:39 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:04:21.495 18:11:39 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:04:21.495 18:11:39 -- common/autotest_common.sh@1485 -- # grep 0000:0b:00.0/nvme/nvme 00:04:21.495 18:11:39 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:04:21.495 18:11:39 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:04:21.495 18:11:39 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:04:21.495 18:11:39 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:21.495 18:11:39 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:21.495 18:11:39 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:21.495 18:11:39 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:21.495 18:11:39 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:21.495 18:11:39 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:21.495 18:11:39 -- common/autotest_common.sh@1529 -- # oacs=' 0xf' 00:04:21.495 18:11:39 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:21.495 18:11:39 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:21.495 18:11:39 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:21.495 18:11:39 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:21.495 18:11:39 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:21.495 18:11:39 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:21.495 18:11:39 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:21.495 18:11:39 -- common/autotest_common.sh@1541 -- # continue 00:04:21.495 18:11:39 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:21.495 18:11:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:21.495 18:11:39 -- common/autotest_common.sh@10 -- # set +x 00:04:21.495 18:11:39 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:21.495 18:11:39 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:21.495 18:11:39 -- common/autotest_common.sh@10 -- # set +x 00:04:21.495 18:11:39 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:22.871 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:22.871 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:22.871 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:22.871 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:22.871 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:22.871 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:22.871 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:22.871 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:22.871 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:22.871 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:22.871 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:22.871 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:22.871 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:22.871 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:22.871 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:22.871 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:23.808 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:04:24.066 18:11:42 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:04:24.066 18:11:42 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:24.066 18:11:42 -- common/autotest_common.sh@10 -- # set +x 00:04:24.066 18:11:42 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:24.066 18:11:42 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:24.066 18:11:42 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:24.066 18:11:42 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:24.066 18:11:42 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:24.066 18:11:42 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:24.066 18:11:42 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:24.066 18:11:42 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:24.066 18:11:42 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:24.066 18:11:42 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:24.066 18:11:42 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:24.066 18:11:42 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:24.066 18:11:42 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:24.066 18:11:42 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:24.066 18:11:42 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:0b:00.0 00:04:24.066 18:11:42 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:24.066 18:11:42 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:04:24.066 18:11:42 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:04:24.066 18:11:42 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:24.066 18:11:42 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:04:24.066 18:11:42 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:04:24.066 18:11:42 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:0b:00.0 00:04:24.066 18:11:42 -- common/autotest_common.sh@1577 -- # [[ -z 0000:0b:00.0 ]] 00:04:24.066 18:11:42 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=1119449 00:04:24.066 18:11:42 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:24.066 18:11:42 -- common/autotest_common.sh@1583 -- # waitforlisten 1119449 00:04:24.067 18:11:42 -- common/autotest_common.sh@831 -- # '[' -z 1119449 ']' 00:04:24.067 18:11:42 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.067 18:11:42 -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:24.067 18:11:42 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.067 18:11:42 -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:24.067 18:11:42 -- common/autotest_common.sh@10 -- # set +x 00:04:24.067 [2024-10-08 18:11:42.272123] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
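The opal_revert_cleanup flow traced above builds its BDF list by asking gen_nvme.sh for controller addresses and keeping only those whose PCI device id in sysfs is 0x0a54. Condensed into a standalone sketch (the jq filter and the device-id comparison are taken from the trace; error handling is omitted):

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  bdfs=()
  while read -r bdf; do
      # keep only controllers whose PCI device id matches 0x0a54
      [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && bdfs+=("$bdf")
  done < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
  printf '%s\n' "${bdfs[@]}"               # on this node: 0000:0b:00.0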
00:04:24.067 [2024-10-08 18:11:42.272214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1119449 ] 00:04:24.067 [2024-10-08 18:11:42.329951] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.324 [2024-10-08 18:11:42.440879] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.582 18:11:42 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:24.582 18:11:42 -- common/autotest_common.sh@864 -- # return 0 00:04:24.582 18:11:42 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:04:24.582 18:11:42 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:04:24.582 18:11:42 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0 00:04:27.862 nvme0n1 00:04:27.862 18:11:45 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:27.862 [2024-10-08 18:11:46.062733] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:27.862 [2024-10-08 18:11:46.062775] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:27.862 request: 00:04:27.862 { 00:04:27.862 "nvme_ctrlr_name": "nvme0", 00:04:27.862 "password": "test", 00:04:27.862 "method": "bdev_nvme_opal_revert", 00:04:27.862 "req_id": 1 00:04:27.862 } 00:04:27.862 Got JSON-RPC error response 00:04:27.862 response: 00:04:27.862 { 00:04:27.862 "code": -32603, 00:04:27.862 "message": "Internal error" 00:04:27.862 } 00:04:27.862 18:11:46 -- common/autotest_common.sh@1589 -- # true 00:04:27.862 18:11:46 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:04:27.862 18:11:46 -- common/autotest_common.sh@1593 -- # killprocess 1119449 00:04:27.862 18:11:46 -- common/autotest_common.sh@950 -- # '[' -z 1119449 ']' 00:04:27.862 18:11:46 -- common/autotest_common.sh@954 -- # kill -0 1119449 00:04:27.863 18:11:46 -- common/autotest_common.sh@955 -- # uname 00:04:27.863 18:11:46 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:27.863 18:11:46 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1119449 00:04:27.863 18:11:46 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:27.863 18:11:46 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:27.863 18:11:46 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1119449' 00:04:27.863 killing process with pid 1119449 00:04:27.863 18:11:46 -- common/autotest_common.sh@969 -- # kill 1119449 00:04:27.863 18:11:46 -- common/autotest_common.sh@974 -- # wait 1119449 00:04:29.767 18:11:47 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:29.767 18:11:47 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:29.767 18:11:47 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:29.767 18:11:47 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:29.767 18:11:47 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:29.767 18:11:47 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:29.767 18:11:47 -- common/autotest_common.sh@10 -- # set +x 00:04:29.767 18:11:47 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:29.767 18:11:47 -- spdk/autotest.sh@155 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:29.767 18:11:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:29.767 18:11:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:29.767 18:11:47 -- common/autotest_common.sh@10 -- # set +x 00:04:29.767 ************************************ 00:04:29.767 START TEST env 00:04:29.767 ************************************ 00:04:29.767 18:11:47 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:29.767 * Looking for test storage... 00:04:29.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:29.767 18:11:47 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:29.767 18:11:48 env -- common/autotest_common.sh@1681 -- # lcov --version 00:04:29.767 18:11:48 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:30.034 18:11:48 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:30.034 18:11:48 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.034 18:11:48 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.034 18:11:48 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.034 18:11:48 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.034 18:11:48 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.034 18:11:48 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.034 18:11:48 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.034 18:11:48 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.034 18:11:48 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.034 18:11:48 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.034 18:11:48 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.034 18:11:48 env -- scripts/common.sh@344 -- # case "$op" in 00:04:30.034 18:11:48 env -- scripts/common.sh@345 -- # : 1 00:04:30.034 18:11:48 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.034 18:11:48 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:30.034 18:11:48 env -- scripts/common.sh@365 -- # decimal 1 00:04:30.034 18:11:48 env -- scripts/common.sh@353 -- # local d=1 00:04:30.034 18:11:48 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.034 18:11:48 env -- scripts/common.sh@355 -- # echo 1 00:04:30.034 18:11:48 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.034 18:11:48 env -- scripts/common.sh@366 -- # decimal 2 00:04:30.034 18:11:48 env -- scripts/common.sh@353 -- # local d=2 00:04:30.034 18:11:48 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.034 18:11:48 env -- scripts/common.sh@355 -- # echo 2 00:04:30.034 18:11:48 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.034 18:11:48 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.034 18:11:48 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.034 18:11:48 env -- scripts/common.sh@368 -- # return 0 00:04:30.034 18:11:48 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.034 18:11:48 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:30.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.034 --rc genhtml_branch_coverage=1 00:04:30.034 --rc genhtml_function_coverage=1 00:04:30.034 --rc genhtml_legend=1 00:04:30.034 --rc geninfo_all_blocks=1 00:04:30.034 --rc geninfo_unexecuted_blocks=1 00:04:30.034 00:04:30.034 ' 00:04:30.034 18:11:48 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:30.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.034 --rc genhtml_branch_coverage=1 00:04:30.034 --rc genhtml_function_coverage=1 00:04:30.034 --rc genhtml_legend=1 00:04:30.034 --rc geninfo_all_blocks=1 00:04:30.034 --rc geninfo_unexecuted_blocks=1 00:04:30.034 00:04:30.034 ' 00:04:30.034 18:11:48 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:30.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.034 --rc genhtml_branch_coverage=1 00:04:30.034 --rc genhtml_function_coverage=1 00:04:30.034 --rc genhtml_legend=1 00:04:30.034 --rc geninfo_all_blocks=1 00:04:30.034 --rc geninfo_unexecuted_blocks=1 00:04:30.034 00:04:30.034 ' 00:04:30.034 18:11:48 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:30.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.034 --rc genhtml_branch_coverage=1 00:04:30.034 --rc genhtml_function_coverage=1 00:04:30.034 --rc genhtml_legend=1 00:04:30.034 --rc geninfo_all_blocks=1 00:04:30.034 --rc geninfo_unexecuted_blocks=1 00:04:30.034 00:04:30.034 ' 00:04:30.034 18:11:48 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:30.034 18:11:48 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.034 18:11:48 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.034 18:11:48 env -- common/autotest_common.sh@10 -- # set +x 00:04:30.034 ************************************ 00:04:30.034 START TEST env_memory 00:04:30.034 ************************************ 00:04:30.034 18:11:48 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:30.034 00:04:30.034 00:04:30.034 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.034 http://cunit.sourceforge.net/ 00:04:30.034 00:04:30.034 00:04:30.034 Suite: memory 00:04:30.034 Test: alloc and free memory map ...[2024-10-08 18:11:48.147585] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:30.034 passed 00:04:30.034 Test: mem map translation ...[2024-10-08 18:11:48.168736] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:30.034 [2024-10-08 18:11:48.168759] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:30.034 [2024-10-08 18:11:48.168818] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:30.034 [2024-10-08 18:11:48.168830] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:30.034 passed 00:04:30.034 Test: mem map registration ...[2024-10-08 18:11:48.212465] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:30.034 [2024-10-08 18:11:48.212487] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:30.034 passed 00:04:30.034 Test: mem map adjacent registrations ...passed 00:04:30.034 00:04:30.034 Run Summary: Type Total Ran Passed Failed Inactive 00:04:30.034 suites 1 1 n/a 0 0 00:04:30.034 tests 4 4 4 0 0 00:04:30.034 asserts 152 152 152 0 n/a 00:04:30.034 00:04:30.034 Elapsed time = 0.145 seconds 00:04:30.034 00:04:30.034 real 0m0.154s 00:04:30.034 user 0m0.142s 00:04:30.034 sys 0m0.012s 00:04:30.034 18:11:48 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:30.034 18:11:48 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:30.034 ************************************ 00:04:30.034 END TEST env_memory 00:04:30.034 ************************************ 00:04:30.034 18:11:48 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:30.034 18:11:48 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.034 18:11:48 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.034 18:11:48 env -- common/autotest_common.sh@10 -- # set +x 00:04:30.034 ************************************ 00:04:30.034 START TEST env_vtophys 00:04:30.034 ************************************ 00:04:30.034 18:11:48 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:30.298 EAL: lib.eal log level changed from notice to debug 00:04:30.298 EAL: Detected lcore 0 as core 0 on socket 0 00:04:30.298 EAL: Detected lcore 1 as core 1 on socket 0 00:04:30.298 EAL: Detected lcore 2 as core 2 on socket 0 00:04:30.298 EAL: Detected lcore 3 as core 3 on socket 0 00:04:30.298 EAL: Detected lcore 4 as core 4 on socket 0 00:04:30.298 EAL: Detected lcore 5 as core 5 on socket 0 00:04:30.298 EAL: Detected lcore 6 as core 8 on socket 0 00:04:30.298 EAL: Detected lcore 7 as core 9 on socket 0 00:04:30.298 EAL: Detected lcore 8 as core 10 on socket 0 00:04:30.298 EAL: Detected lcore 9 as core 11 on socket 0 00:04:30.298 EAL: Detected lcore 10 
as core 12 on socket 0 00:04:30.298 EAL: Detected lcore 11 as core 13 on socket 0 00:04:30.298 EAL: Detected lcore 12 as core 0 on socket 1 00:04:30.298 EAL: Detected lcore 13 as core 1 on socket 1 00:04:30.298 EAL: Detected lcore 14 as core 2 on socket 1 00:04:30.298 EAL: Detected lcore 15 as core 3 on socket 1 00:04:30.298 EAL: Detected lcore 16 as core 4 on socket 1 00:04:30.298 EAL: Detected lcore 17 as core 5 on socket 1 00:04:30.298 EAL: Detected lcore 18 as core 8 on socket 1 00:04:30.298 EAL: Detected lcore 19 as core 9 on socket 1 00:04:30.298 EAL: Detected lcore 20 as core 10 on socket 1 00:04:30.298 EAL: Detected lcore 21 as core 11 on socket 1 00:04:30.298 EAL: Detected lcore 22 as core 12 on socket 1 00:04:30.298 EAL: Detected lcore 23 as core 13 on socket 1 00:04:30.298 EAL: Detected lcore 24 as core 0 on socket 0 00:04:30.298 EAL: Detected lcore 25 as core 1 on socket 0 00:04:30.298 EAL: Detected lcore 26 as core 2 on socket 0 00:04:30.298 EAL: Detected lcore 27 as core 3 on socket 0 00:04:30.298 EAL: Detected lcore 28 as core 4 on socket 0 00:04:30.298 EAL: Detected lcore 29 as core 5 on socket 0 00:04:30.298 EAL: Detected lcore 30 as core 8 on socket 0 00:04:30.298 EAL: Detected lcore 31 as core 9 on socket 0 00:04:30.298 EAL: Detected lcore 32 as core 10 on socket 0 00:04:30.298 EAL: Detected lcore 33 as core 11 on socket 0 00:04:30.298 EAL: Detected lcore 34 as core 12 on socket 0 00:04:30.298 EAL: Detected lcore 35 as core 13 on socket 0 00:04:30.298 EAL: Detected lcore 36 as core 0 on socket 1 00:04:30.298 EAL: Detected lcore 37 as core 1 on socket 1 00:04:30.298 EAL: Detected lcore 38 as core 2 on socket 1 00:04:30.298 EAL: Detected lcore 39 as core 3 on socket 1 00:04:30.298 EAL: Detected lcore 40 as core 4 on socket 1 00:04:30.298 EAL: Detected lcore 41 as core 5 on socket 1 00:04:30.298 EAL: Detected lcore 42 as core 8 on socket 1 00:04:30.298 EAL: Detected lcore 43 as core 9 on socket 1 00:04:30.298 EAL: Detected lcore 44 as core 10 on socket 1 00:04:30.298 EAL: Detected lcore 45 as core 11 on socket 1 00:04:30.298 EAL: Detected lcore 46 as core 12 on socket 1 00:04:30.298 EAL: Detected lcore 47 as core 13 on socket 1 00:04:30.298 EAL: Maximum logical cores by configuration: 128 00:04:30.298 EAL: Detected CPU lcores: 48 00:04:30.298 EAL: Detected NUMA nodes: 2 00:04:30.298 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:30.298 EAL: Detected shared linkage of DPDK 00:04:30.298 EAL: No shared files mode enabled, IPC will be disabled 00:04:30.298 EAL: Bus pci wants IOVA as 'DC' 00:04:30.298 EAL: Buses did not request a specific IOVA mode. 00:04:30.298 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:30.298 EAL: Selected IOVA mode 'VA' 00:04:30.298 EAL: Probing VFIO support... 00:04:30.298 EAL: IOMMU type 1 (Type 1) is supported 00:04:30.298 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:30.298 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:30.298 EAL: VFIO support initialized 00:04:30.298 EAL: Ask a virtual area of 0x2e000 bytes 00:04:30.298 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:30.298 EAL: Setting up physically contiguous memory... 
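The detection EAL reports above (48 lcores across 2 NUMA nodes, IOVA as VA, VFIO initialized) can be cross-checked from the shell before running the env tests; a quick sketch, assuming lscpu and the standard sysfs layout are available:

  lscpu -p=CPU,CORE,SOCKET | grep -vc '^#'                      # logical cores EAL will see (48 here)
  ls -d /sys/devices/system/node/node[0-9]* | wc -l             # NUMA nodes (2 here)
  cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages    # configured 2 MiB hugepages
  ls /dev/vfio                                                  # VFIO groups appear here once the IOMMU is set up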
00:04:30.298 EAL: Setting maximum number of open files to 524288 00:04:30.298 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:30.298 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:30.298 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:30.298 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.298 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:30.298 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.298 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.298 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:30.298 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:30.298 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.298 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:30.298 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.298 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.298 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:30.298 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:30.298 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.299 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:30.299 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.299 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.299 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:30.299 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:30.299 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.299 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:30.299 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:30.299 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.299 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:30.299 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:30.299 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:30.299 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.299 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:30.299 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:30.299 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.299 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:30.299 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:30.299 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.299 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:30.299 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:30.299 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.299 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:30.299 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:30.299 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.299 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:30.299 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:30.299 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.299 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:30.299 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:30.299 EAL: Ask a virtual area of 0x61000 bytes 00:04:30.299 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:30.299 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:30.299 EAL: Ask a virtual area of 0x400000000 bytes 00:04:30.299 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:30.299 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:30.299 EAL: Hugepages will be freed exactly as allocated. 00:04:30.299 EAL: No shared files mode enabled, IPC is disabled 00:04:30.299 EAL: No shared files mode enabled, IPC is disabled 00:04:30.299 EAL: TSC frequency is ~2700000 KHz 00:04:30.299 EAL: Main lcore 0 is ready (tid=7f7c9cae8a00;cpuset=[0]) 00:04:30.299 EAL: Trying to obtain current memory policy. 00:04:30.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.299 EAL: Restoring previous memory policy: 0 00:04:30.299 EAL: request: mp_malloc_sync 00:04:30.299 EAL: No shared files mode enabled, IPC is disabled 00:04:30.299 EAL: Heap on socket 0 was expanded by 2MB 00:04:30.299 EAL: No shared files mode enabled, IPC is disabled 00:04:30.299 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:30.299 EAL: Mem event callback 'spdk:(nil)' registered 00:04:30.299 00:04:30.299 00:04:30.299 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.299 http://cunit.sourceforge.net/ 00:04:30.299 00:04:30.299 00:04:30.299 Suite: components_suite 00:04:30.299 Test: vtophys_malloc_test ...passed 00:04:30.299 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:30.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.299 EAL: Restoring previous memory policy: 4 00:04:30.299 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.299 EAL: request: mp_malloc_sync 00:04:30.299 EAL: No shared files mode enabled, IPC is disabled 00:04:30.299 EAL: Heap on socket 0 was expanded by 4MB 00:04:30.299 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.299 EAL: request: mp_malloc_sync 00:04:30.299 EAL: No shared files mode enabled, IPC is disabled 00:04:30.299 EAL: Heap on socket 0 was shrunk by 4MB 00:04:30.299 EAL: Trying to obtain current memory policy. 00:04:30.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.299 EAL: Restoring previous memory policy: 4 00:04:30.299 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.299 EAL: request: mp_malloc_sync 00:04:30.299 EAL: No shared files mode enabled, IPC is disabled 00:04:30.299 EAL: Heap on socket 0 was expanded by 6MB 00:04:30.299 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.299 EAL: request: mp_malloc_sync 00:04:30.299 EAL: No shared files mode enabled, IPC is disabled 00:04:30.299 EAL: Heap on socket 0 was shrunk by 6MB 00:04:30.299 EAL: Trying to obtain current memory policy. 00:04:30.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.299 EAL: Restoring previous memory policy: 4 00:04:30.299 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.299 EAL: request: mp_malloc_sync 00:04:30.299 EAL: No shared files mode enabled, IPC is disabled 00:04:30.299 EAL: Heap on socket 0 was expanded by 10MB 00:04:30.299 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.299 EAL: request: mp_malloc_sync 00:04:30.299 EAL: No shared files mode enabled, IPC is disabled 00:04:30.299 EAL: Heap on socket 0 was shrunk by 10MB 00:04:30.299 EAL: Trying to obtain current memory policy. 
00:04:30.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.299 EAL: Restoring previous memory policy: 4 00:04:30.299 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.299 EAL: request: mp_malloc_sync 00:04:30.300 EAL: No shared files mode enabled, IPC is disabled 00:04:30.300 EAL: Heap on socket 0 was expanded by 18MB 00:04:30.300 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.300 EAL: request: mp_malloc_sync 00:04:30.300 EAL: No shared files mode enabled, IPC is disabled 00:04:30.300 EAL: Heap on socket 0 was shrunk by 18MB 00:04:30.300 EAL: Trying to obtain current memory policy. 00:04:30.300 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.300 EAL: Restoring previous memory policy: 4 00:04:30.300 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.300 EAL: request: mp_malloc_sync 00:04:30.300 EAL: No shared files mode enabled, IPC is disabled 00:04:30.300 EAL: Heap on socket 0 was expanded by 34MB 00:04:30.300 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.300 EAL: request: mp_malloc_sync 00:04:30.300 EAL: No shared files mode enabled, IPC is disabled 00:04:30.300 EAL: Heap on socket 0 was shrunk by 34MB 00:04:30.300 EAL: Trying to obtain current memory policy. 00:04:30.300 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.300 EAL: Restoring previous memory policy: 4 00:04:30.300 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.300 EAL: request: mp_malloc_sync 00:04:30.300 EAL: No shared files mode enabled, IPC is disabled 00:04:30.300 EAL: Heap on socket 0 was expanded by 66MB 00:04:30.300 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.300 EAL: request: mp_malloc_sync 00:04:30.300 EAL: No shared files mode enabled, IPC is disabled 00:04:30.300 EAL: Heap on socket 0 was shrunk by 66MB 00:04:30.300 EAL: Trying to obtain current memory policy. 00:04:30.300 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.300 EAL: Restoring previous memory policy: 4 00:04:30.300 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.300 EAL: request: mp_malloc_sync 00:04:30.300 EAL: No shared files mode enabled, IPC is disabled 00:04:30.300 EAL: Heap on socket 0 was expanded by 130MB 00:04:30.300 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.300 EAL: request: mp_malloc_sync 00:04:30.300 EAL: No shared files mode enabled, IPC is disabled 00:04:30.300 EAL: Heap on socket 0 was shrunk by 130MB 00:04:30.300 EAL: Trying to obtain current memory policy. 00:04:30.300 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.565 EAL: Restoring previous memory policy: 4 00:04:30.565 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.565 EAL: request: mp_malloc_sync 00:04:30.565 EAL: No shared files mode enabled, IPC is disabled 00:04:30.565 EAL: Heap on socket 0 was expanded by 258MB 00:04:30.565 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.565 EAL: request: mp_malloc_sync 00:04:30.565 EAL: No shared files mode enabled, IPC is disabled 00:04:30.565 EAL: Heap on socket 0 was shrunk by 258MB 00:04:30.565 EAL: Trying to obtain current memory policy. 
00:04:30.565 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.823 EAL: Restoring previous memory policy: 4 00:04:30.823 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.823 EAL: request: mp_malloc_sync 00:04:30.823 EAL: No shared files mode enabled, IPC is disabled 00:04:30.823 EAL: Heap on socket 0 was expanded by 514MB 00:04:30.823 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.823 EAL: request: mp_malloc_sync 00:04:30.823 EAL: No shared files mode enabled, IPC is disabled 00:04:30.823 EAL: Heap on socket 0 was shrunk by 514MB 00:04:30.823 EAL: Trying to obtain current memory policy. 00:04:30.823 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.081 EAL: Restoring previous memory policy: 4 00:04:31.081 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.081 EAL: request: mp_malloc_sync 00:04:31.081 EAL: No shared files mode enabled, IPC is disabled 00:04:31.081 EAL: Heap on socket 0 was expanded by 1026MB 00:04:31.338 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.596 EAL: request: mp_malloc_sync 00:04:31.596 EAL: No shared files mode enabled, IPC is disabled 00:04:31.596 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:31.596 passed 00:04:31.596 00:04:31.596 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.596 suites 1 1 n/a 0 0 00:04:31.596 tests 2 2 2 0 0 00:04:31.596 asserts 497 497 497 0 n/a 00:04:31.596 00:04:31.596 Elapsed time = 1.360 seconds 00:04:31.596 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.596 EAL: request: mp_malloc_sync 00:04:31.596 EAL: No shared files mode enabled, IPC is disabled 00:04:31.596 EAL: Heap on socket 0 was shrunk by 2MB 00:04:31.596 EAL: No shared files mode enabled, IPC is disabled 00:04:31.596 EAL: No shared files mode enabled, IPC is disabled 00:04:31.596 EAL: No shared files mode enabled, IPC is disabled 00:04:31.596 00:04:31.596 real 0m1.481s 00:04:31.596 user 0m0.864s 00:04:31.596 sys 0m0.575s 00:04:31.596 18:11:49 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:31.596 18:11:49 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:31.596 ************************************ 00:04:31.596 END TEST env_vtophys 00:04:31.596 ************************************ 00:04:31.596 18:11:49 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:31.596 18:11:49 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:31.596 18:11:49 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:31.596 18:11:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.596 ************************************ 00:04:31.596 START TEST env_pci 00:04:31.596 ************************************ 00:04:31.596 18:11:49 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:31.596 00:04:31.596 00:04:31.596 CUnit - A unit testing framework for C - Version 2.1-3 00:04:31.596 http://cunit.sourceforge.net/ 00:04:31.596 00:04:31.596 00:04:31.596 Suite: pci 00:04:31.596 Test: pci_hook ...[2024-10-08 18:11:49.858883] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1111:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1120363 has claimed it 00:04:31.596 EAL: Cannot find device (10000:00:01.0) 00:04:31.596 EAL: Failed to attach device on primary process 00:04:31.596 passed 00:04:31.596 00:04:31.596 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:31.596 suites 1 1 n/a 0 0 00:04:31.596 tests 1 1 1 0 0 00:04:31.596 asserts 25 25 25 0 n/a 00:04:31.596 00:04:31.596 Elapsed time = 0.022 seconds 00:04:31.855 00:04:31.855 real 0m0.035s 00:04:31.855 user 0m0.010s 00:04:31.856 sys 0m0.025s 00:04:31.856 18:11:49 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:31.856 18:11:49 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:31.856 ************************************ 00:04:31.856 END TEST env_pci 00:04:31.856 ************************************ 00:04:31.856 18:11:49 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:31.856 18:11:49 env -- env/env.sh@15 -- # uname 00:04:31.856 18:11:49 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:31.856 18:11:49 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:31.856 18:11:49 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:31.856 18:11:49 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:31.856 18:11:49 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:31.856 18:11:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.856 ************************************ 00:04:31.856 START TEST env_dpdk_post_init 00:04:31.856 ************************************ 00:04:31.856 18:11:49 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:31.856 EAL: Detected CPU lcores: 48 00:04:31.856 EAL: Detected NUMA nodes: 2 00:04:31.856 EAL: Detected shared linkage of DPDK 00:04:31.856 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:31.856 EAL: Selected IOVA mode 'VA' 00:04:31.856 EAL: VFIO support initialized 00:04:31.856 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:31.856 EAL: Using IOMMU type 1 (Type 1) 00:04:31.856 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:31.856 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:31.856 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:31.856 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:31.856 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:31.856 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:31.856 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:31.856 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:32.794 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0b:00.0 (socket 0) 00:04:32.794 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:32.794 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:32.794 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:32.794 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:32.794 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:32.794 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:32.794 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:32.794 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 
00:04:36.075 EAL: Releasing PCI mapped resource for 0000:0b:00.0 00:04:36.076 EAL: Calling pci_unmap_resource for 0000:0b:00.0 at 0x202001020000 00:04:36.076 Starting DPDK initialization... 00:04:36.076 Starting SPDK post initialization... 00:04:36.076 SPDK NVMe probe 00:04:36.076 Attaching to 0000:0b:00.0 00:04:36.076 Attached to 0000:0b:00.0 00:04:36.076 Cleaning up... 00:04:36.076 00:04:36.076 real 0m4.343s 00:04:36.076 user 0m2.975s 00:04:36.076 sys 0m0.424s 00:04:36.076 18:11:54 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.076 18:11:54 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:36.076 ************************************ 00:04:36.076 END TEST env_dpdk_post_init 00:04:36.076 ************************************ 00:04:36.076 18:11:54 env -- env/env.sh@26 -- # uname 00:04:36.076 18:11:54 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:36.076 18:11:54 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:36.076 18:11:54 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.076 18:11:54 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.076 18:11:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.076 ************************************ 00:04:36.076 START TEST env_mem_callbacks 00:04:36.076 ************************************ 00:04:36.076 18:11:54 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:36.076 EAL: Detected CPU lcores: 48 00:04:36.076 EAL: Detected NUMA nodes: 2 00:04:36.076 EAL: Detected shared linkage of DPDK 00:04:36.076 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:36.334 EAL: Selected IOVA mode 'VA' 00:04:36.334 EAL: VFIO support initialized 00:04:36.334 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:36.334 00:04:36.334 00:04:36.334 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.334 http://cunit.sourceforge.net/ 00:04:36.334 00:04:36.334 00:04:36.334 Suite: memory 00:04:36.334 Test: test ... 
00:04:36.334 register 0x200000200000 2097152 00:04:36.334 malloc 3145728 00:04:36.334 register 0x200000400000 4194304 00:04:36.334 buf 0x200000500000 len 3145728 PASSED 00:04:36.334 malloc 64 00:04:36.334 buf 0x2000004fff40 len 64 PASSED 00:04:36.334 malloc 4194304 00:04:36.334 register 0x200000800000 6291456 00:04:36.334 buf 0x200000a00000 len 4194304 PASSED 00:04:36.334 free 0x200000500000 3145728 00:04:36.334 free 0x2000004fff40 64 00:04:36.334 unregister 0x200000400000 4194304 PASSED 00:04:36.334 free 0x200000a00000 4194304 00:04:36.334 unregister 0x200000800000 6291456 PASSED 00:04:36.335 malloc 8388608 00:04:36.335 register 0x200000400000 10485760 00:04:36.335 buf 0x200000600000 len 8388608 PASSED 00:04:36.335 free 0x200000600000 8388608 00:04:36.335 unregister 0x200000400000 10485760 PASSED 00:04:36.335 passed 00:04:36.335 00:04:36.335 Run Summary: Type Total Ran Passed Failed Inactive 00:04:36.335 suites 1 1 n/a 0 0 00:04:36.335 tests 1 1 1 0 0 00:04:36.335 asserts 15 15 15 0 n/a 00:04:36.335 00:04:36.335 Elapsed time = 0.005 seconds 00:04:36.335 00:04:36.335 real 0m0.049s 00:04:36.335 user 0m0.013s 00:04:36.335 sys 0m0.035s 00:04:36.335 18:11:54 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.335 18:11:54 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:36.335 ************************************ 00:04:36.335 END TEST env_mem_callbacks 00:04:36.335 ************************************ 00:04:36.335 00:04:36.335 real 0m6.458s 00:04:36.335 user 0m4.219s 00:04:36.335 sys 0m1.273s 00:04:36.335 18:11:54 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.335 18:11:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.335 ************************************ 00:04:36.335 END TEST env 00:04:36.335 ************************************ 00:04:36.335 18:11:54 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:36.335 18:11:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.335 18:11:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.335 18:11:54 -- common/autotest_common.sh@10 -- # set +x 00:04:36.335 ************************************ 00:04:36.335 START TEST rpc 00:04:36.335 ************************************ 00:04:36.335 18:11:54 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:36.335 * Looking for test storage... 
00:04:36.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:36.335 18:11:54 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:36.335 18:11:54 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:36.335 18:11:54 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:36.335 18:11:54 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:36.335 18:11:54 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.335 18:11:54 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.335 18:11:54 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.335 18:11:54 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.335 18:11:54 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.335 18:11:54 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.335 18:11:54 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.335 18:11:54 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.335 18:11:54 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.335 18:11:54 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.335 18:11:54 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.335 18:11:54 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:36.335 18:11:54 rpc -- scripts/common.sh@345 -- # : 1 00:04:36.335 18:11:54 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.335 18:11:54 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:36.335 18:11:54 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:36.335 18:11:54 rpc -- scripts/common.sh@353 -- # local d=1 00:04:36.335 18:11:54 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.335 18:11:54 rpc -- scripts/common.sh@355 -- # echo 1 00:04:36.335 18:11:54 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.335 18:11:54 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:36.335 18:11:54 rpc -- scripts/common.sh@353 -- # local d=2 00:04:36.335 18:11:54 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.335 18:11:54 rpc -- scripts/common.sh@355 -- # echo 2 00:04:36.335 18:11:54 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.335 18:11:54 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.335 18:11:54 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.335 18:11:54 rpc -- scripts/common.sh@368 -- # return 0 00:04:36.335 18:11:54 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.335 18:11:54 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:36.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.335 --rc genhtml_branch_coverage=1 00:04:36.335 --rc genhtml_function_coverage=1 00:04:36.335 --rc genhtml_legend=1 00:04:36.335 --rc geninfo_all_blocks=1 00:04:36.335 --rc geninfo_unexecuted_blocks=1 00:04:36.335 00:04:36.335 ' 00:04:36.335 18:11:54 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:36.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.335 --rc genhtml_branch_coverage=1 00:04:36.335 --rc genhtml_function_coverage=1 00:04:36.335 --rc genhtml_legend=1 00:04:36.335 --rc geninfo_all_blocks=1 00:04:36.335 --rc geninfo_unexecuted_blocks=1 00:04:36.335 00:04:36.335 ' 00:04:36.335 18:11:54 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:36.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.335 --rc genhtml_branch_coverage=1 00:04:36.335 --rc genhtml_function_coverage=1 
00:04:36.335 --rc genhtml_legend=1 00:04:36.335 --rc geninfo_all_blocks=1 00:04:36.335 --rc geninfo_unexecuted_blocks=1 00:04:36.335 00:04:36.335 ' 00:04:36.335 18:11:54 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:36.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.335 --rc genhtml_branch_coverage=1 00:04:36.335 --rc genhtml_function_coverage=1 00:04:36.335 --rc genhtml_legend=1 00:04:36.335 --rc geninfo_all_blocks=1 00:04:36.335 --rc geninfo_unexecuted_blocks=1 00:04:36.335 00:04:36.335 ' 00:04:36.335 18:11:54 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1121129 00:04:36.335 18:11:54 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:36.335 18:11:54 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.335 18:11:54 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1121129 00:04:36.335 18:11:54 rpc -- common/autotest_common.sh@831 -- # '[' -z 1121129 ']' 00:04:36.335 18:11:54 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.335 18:11:54 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:36.335 18:11:54 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.335 18:11:54 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:36.335 18:11:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.594 [2024-10-08 18:11:54.645434] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:04:36.594 [2024-10-08 18:11:54.645520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1121129 ] 00:04:36.594 [2024-10-08 18:11:54.702699] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.594 [2024-10-08 18:11:54.812161] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:36.594 [2024-10-08 18:11:54.812213] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1121129' to capture a snapshot of events at runtime. 00:04:36.594 [2024-10-08 18:11:54.812226] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:36.594 [2024-10-08 18:11:54.812242] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:36.594 [2024-10-08 18:11:54.812252] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1121129 for offline analysis/debug. 
00:04:36.594 [2024-10-08 18:11:54.812864] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.853 18:11:55 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:36.853 18:11:55 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:36.853 18:11:55 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:36.853 18:11:55 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:36.853 18:11:55 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:36.853 18:11:55 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:36.853 18:11:55 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.853 18:11:55 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.853 18:11:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.853 ************************************ 00:04:36.853 START TEST rpc_integrity 00:04:36.853 ************************************ 00:04:36.853 18:11:55 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:36.853 18:11:55 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:36.853 18:11:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.853 18:11:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.853 18:11:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.853 18:11:55 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:36.853 18:11:55 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:37.111 18:11:55 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:37.111 18:11:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:37.112 18:11:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.112 18:11:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.112 18:11:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.112 18:11:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:37.112 18:11:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:37.112 18:11:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.112 18:11:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.112 18:11:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.112 18:11:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:37.112 { 00:04:37.112 "name": "Malloc0", 00:04:37.112 "aliases": [ 00:04:37.112 "9373c344-9732-46a4-ad97-ca292584f26d" 00:04:37.112 ], 00:04:37.112 "product_name": "Malloc disk", 00:04:37.112 "block_size": 512, 00:04:37.112 "num_blocks": 16384, 00:04:37.112 "uuid": "9373c344-9732-46a4-ad97-ca292584f26d", 00:04:37.112 "assigned_rate_limits": { 00:04:37.112 "rw_ios_per_sec": 0, 00:04:37.112 "rw_mbytes_per_sec": 0, 00:04:37.112 "r_mbytes_per_sec": 0, 00:04:37.112 "w_mbytes_per_sec": 0 00:04:37.112 }, 
00:04:37.112 "claimed": false, 00:04:37.112 "zoned": false, 00:04:37.112 "supported_io_types": { 00:04:37.112 "read": true, 00:04:37.112 "write": true, 00:04:37.112 "unmap": true, 00:04:37.112 "flush": true, 00:04:37.112 "reset": true, 00:04:37.112 "nvme_admin": false, 00:04:37.112 "nvme_io": false, 00:04:37.112 "nvme_io_md": false, 00:04:37.112 "write_zeroes": true, 00:04:37.112 "zcopy": true, 00:04:37.112 "get_zone_info": false, 00:04:37.112 "zone_management": false, 00:04:37.112 "zone_append": false, 00:04:37.112 "compare": false, 00:04:37.112 "compare_and_write": false, 00:04:37.112 "abort": true, 00:04:37.112 "seek_hole": false, 00:04:37.112 "seek_data": false, 00:04:37.112 "copy": true, 00:04:37.112 "nvme_iov_md": false 00:04:37.112 }, 00:04:37.112 "memory_domains": [ 00:04:37.112 { 00:04:37.112 "dma_device_id": "system", 00:04:37.112 "dma_device_type": 1 00:04:37.112 }, 00:04:37.112 { 00:04:37.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.112 "dma_device_type": 2 00:04:37.112 } 00:04:37.112 ], 00:04:37.112 "driver_specific": {} 00:04:37.112 } 00:04:37.112 ]' 00:04:37.112 18:11:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:37.112 18:11:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:37.112 18:11:55 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:37.112 18:11:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.112 18:11:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.112 [2024-10-08 18:11:55.205183] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:37.112 [2024-10-08 18:11:55.205218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:37.112 [2024-10-08 18:11:55.205239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1317800 00:04:37.112 [2024-10-08 18:11:55.205251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:37.112 [2024-10-08 18:11:55.206626] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:37.112 [2024-10-08 18:11:55.206663] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:37.112 Passthru0 00:04:37.112 18:11:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.112 18:11:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:37.112 18:11:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.112 18:11:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.112 18:11:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.112 18:11:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:37.112 { 00:04:37.112 "name": "Malloc0", 00:04:37.112 "aliases": [ 00:04:37.112 "9373c344-9732-46a4-ad97-ca292584f26d" 00:04:37.112 ], 00:04:37.112 "product_name": "Malloc disk", 00:04:37.112 "block_size": 512, 00:04:37.112 "num_blocks": 16384, 00:04:37.112 "uuid": "9373c344-9732-46a4-ad97-ca292584f26d", 00:04:37.112 "assigned_rate_limits": { 00:04:37.112 "rw_ios_per_sec": 0, 00:04:37.112 "rw_mbytes_per_sec": 0, 00:04:37.112 "r_mbytes_per_sec": 0, 00:04:37.112 "w_mbytes_per_sec": 0 00:04:37.112 }, 00:04:37.112 "claimed": true, 00:04:37.112 "claim_type": "exclusive_write", 00:04:37.112 "zoned": false, 00:04:37.112 "supported_io_types": { 00:04:37.112 "read": true, 00:04:37.112 "write": true, 00:04:37.112 "unmap": true, 00:04:37.112 "flush": 
true, 00:04:37.112 "reset": true, 00:04:37.112 "nvme_admin": false, 00:04:37.112 "nvme_io": false, 00:04:37.112 "nvme_io_md": false, 00:04:37.112 "write_zeroes": true, 00:04:37.112 "zcopy": true, 00:04:37.112 "get_zone_info": false, 00:04:37.112 "zone_management": false, 00:04:37.112 "zone_append": false, 00:04:37.112 "compare": false, 00:04:37.112 "compare_and_write": false, 00:04:37.112 "abort": true, 00:04:37.112 "seek_hole": false, 00:04:37.112 "seek_data": false, 00:04:37.112 "copy": true, 00:04:37.112 "nvme_iov_md": false 00:04:37.112 }, 00:04:37.112 "memory_domains": [ 00:04:37.112 { 00:04:37.112 "dma_device_id": "system", 00:04:37.112 "dma_device_type": 1 00:04:37.112 }, 00:04:37.112 { 00:04:37.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.112 "dma_device_type": 2 00:04:37.112 } 00:04:37.112 ], 00:04:37.112 "driver_specific": {} 00:04:37.112 }, 00:04:37.112 { 00:04:37.112 "name": "Passthru0", 00:04:37.112 "aliases": [ 00:04:37.112 "4844b099-6aea-59c6-94a3-53f22f7421e9" 00:04:37.112 ], 00:04:37.112 "product_name": "passthru", 00:04:37.112 "block_size": 512, 00:04:37.112 "num_blocks": 16384, 00:04:37.112 "uuid": "4844b099-6aea-59c6-94a3-53f22f7421e9", 00:04:37.112 "assigned_rate_limits": { 00:04:37.112 "rw_ios_per_sec": 0, 00:04:37.112 "rw_mbytes_per_sec": 0, 00:04:37.112 "r_mbytes_per_sec": 0, 00:04:37.112 "w_mbytes_per_sec": 0 00:04:37.112 }, 00:04:37.112 "claimed": false, 00:04:37.112 "zoned": false, 00:04:37.112 "supported_io_types": { 00:04:37.112 "read": true, 00:04:37.112 "write": true, 00:04:37.112 "unmap": true, 00:04:37.112 "flush": true, 00:04:37.112 "reset": true, 00:04:37.112 "nvme_admin": false, 00:04:37.112 "nvme_io": false, 00:04:37.112 "nvme_io_md": false, 00:04:37.112 "write_zeroes": true, 00:04:37.112 "zcopy": true, 00:04:37.112 "get_zone_info": false, 00:04:37.112 "zone_management": false, 00:04:37.112 "zone_append": false, 00:04:37.112 "compare": false, 00:04:37.112 "compare_and_write": false, 00:04:37.112 "abort": true, 00:04:37.112 "seek_hole": false, 00:04:37.112 "seek_data": false, 00:04:37.112 "copy": true, 00:04:37.112 "nvme_iov_md": false 00:04:37.112 }, 00:04:37.112 "memory_domains": [ 00:04:37.112 { 00:04:37.112 "dma_device_id": "system", 00:04:37.112 "dma_device_type": 1 00:04:37.112 }, 00:04:37.112 { 00:04:37.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.112 "dma_device_type": 2 00:04:37.112 } 00:04:37.112 ], 00:04:37.112 "driver_specific": { 00:04:37.112 "passthru": { 00:04:37.112 "name": "Passthru0", 00:04:37.112 "base_bdev_name": "Malloc0" 00:04:37.112 } 00:04:37.112 } 00:04:37.112 } 00:04:37.112 ]' 00:04:37.112 18:11:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:37.112 18:11:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:37.112 18:11:55 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:37.112 18:11:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.112 18:11:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.112 18:11:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.112 18:11:55 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:37.112 18:11:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.112 18:11:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.112 18:11:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.112 18:11:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:37.112 18:11:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.112 18:11:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.112 18:11:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.112 18:11:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:37.112 18:11:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:37.112 18:11:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:37.112 00:04:37.112 real 0m0.212s 00:04:37.112 user 0m0.140s 00:04:37.112 sys 0m0.017s 00:04:37.112 18:11:55 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.112 18:11:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.112 ************************************ 00:04:37.112 END TEST rpc_integrity 00:04:37.112 ************************************ 00:04:37.112 18:11:55 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:37.112 18:11:55 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.112 18:11:55 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.112 18:11:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.112 ************************************ 00:04:37.112 START TEST rpc_plugins 00:04:37.112 ************************************ 00:04:37.112 18:11:55 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:37.112 18:11:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:37.112 18:11:55 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.112 18:11:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.112 18:11:55 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.112 18:11:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:37.112 18:11:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:37.112 18:11:55 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.112 18:11:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.112 18:11:55 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.112 18:11:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:37.112 { 00:04:37.113 "name": "Malloc1", 00:04:37.113 "aliases": [ 00:04:37.113 "980819df-f56a-41e3-b0fc-eec73de61ac5" 00:04:37.113 ], 00:04:37.113 "product_name": "Malloc disk", 00:04:37.113 "block_size": 4096, 00:04:37.113 "num_blocks": 256, 00:04:37.113 "uuid": "980819df-f56a-41e3-b0fc-eec73de61ac5", 00:04:37.113 "assigned_rate_limits": { 00:04:37.113 "rw_ios_per_sec": 0, 00:04:37.113 "rw_mbytes_per_sec": 0, 00:04:37.113 "r_mbytes_per_sec": 0, 00:04:37.113 "w_mbytes_per_sec": 0 00:04:37.113 }, 00:04:37.113 "claimed": false, 00:04:37.113 "zoned": false, 00:04:37.113 "supported_io_types": { 00:04:37.113 "read": true, 00:04:37.113 "write": true, 00:04:37.113 "unmap": true, 00:04:37.113 "flush": true, 00:04:37.113 "reset": true, 00:04:37.113 "nvme_admin": false, 00:04:37.113 "nvme_io": false, 00:04:37.113 "nvme_io_md": false, 00:04:37.113 "write_zeroes": true, 00:04:37.113 "zcopy": true, 00:04:37.113 "get_zone_info": false, 00:04:37.113 "zone_management": false, 00:04:37.113 "zone_append": false, 00:04:37.113 "compare": false, 00:04:37.113 "compare_and_write": false, 00:04:37.113 "abort": true, 00:04:37.113 "seek_hole": false, 00:04:37.113 "seek_data": false, 00:04:37.113 "copy": true, 00:04:37.113 "nvme_iov_md": false 
00:04:37.113 }, 00:04:37.113 "memory_domains": [ 00:04:37.113 { 00:04:37.113 "dma_device_id": "system", 00:04:37.113 "dma_device_type": 1 00:04:37.113 }, 00:04:37.113 { 00:04:37.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.113 "dma_device_type": 2 00:04:37.113 } 00:04:37.113 ], 00:04:37.113 "driver_specific": {} 00:04:37.113 } 00:04:37.113 ]' 00:04:37.113 18:11:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:37.374 18:11:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:37.374 18:11:55 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:37.374 18:11:55 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.374 18:11:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.374 18:11:55 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.374 18:11:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:37.374 18:11:55 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.374 18:11:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.374 18:11:55 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.374 18:11:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:37.374 18:11:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:37.374 18:11:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:37.374 00:04:37.374 real 0m0.115s 00:04:37.374 user 0m0.072s 00:04:37.374 sys 0m0.008s 00:04:37.374 18:11:55 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.374 18:11:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:37.374 ************************************ 00:04:37.374 END TEST rpc_plugins 00:04:37.374 ************************************ 00:04:37.374 18:11:55 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:37.374 18:11:55 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.374 18:11:55 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.374 18:11:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.374 ************************************ 00:04:37.374 START TEST rpc_trace_cmd_test 00:04:37.374 ************************************ 00:04:37.374 18:11:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:37.374 18:11:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:37.374 18:11:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:37.374 18:11:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.374 18:11:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:37.374 18:11:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.374 18:11:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:37.374 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1121129", 00:04:37.374 "tpoint_group_mask": "0x8", 00:04:37.374 "iscsi_conn": { 00:04:37.374 "mask": "0x2", 00:04:37.374 "tpoint_mask": "0x0" 00:04:37.374 }, 00:04:37.374 "scsi": { 00:04:37.374 "mask": "0x4", 00:04:37.374 "tpoint_mask": "0x0" 00:04:37.374 }, 00:04:37.374 "bdev": { 00:04:37.374 "mask": "0x8", 00:04:37.374 "tpoint_mask": "0xffffffffffffffff" 00:04:37.374 }, 00:04:37.374 "nvmf_rdma": { 00:04:37.374 "mask": "0x10", 00:04:37.374 "tpoint_mask": "0x0" 00:04:37.374 }, 00:04:37.374 "nvmf_tcp": { 00:04:37.374 "mask": "0x20", 00:04:37.374 
"tpoint_mask": "0x0" 00:04:37.374 }, 00:04:37.374 "ftl": { 00:04:37.374 "mask": "0x40", 00:04:37.374 "tpoint_mask": "0x0" 00:04:37.374 }, 00:04:37.374 "blobfs": { 00:04:37.374 "mask": "0x80", 00:04:37.374 "tpoint_mask": "0x0" 00:04:37.374 }, 00:04:37.374 "dsa": { 00:04:37.374 "mask": "0x200", 00:04:37.374 "tpoint_mask": "0x0" 00:04:37.374 }, 00:04:37.374 "thread": { 00:04:37.374 "mask": "0x400", 00:04:37.374 "tpoint_mask": "0x0" 00:04:37.374 }, 00:04:37.374 "nvme_pcie": { 00:04:37.374 "mask": "0x800", 00:04:37.374 "tpoint_mask": "0x0" 00:04:37.374 }, 00:04:37.374 "iaa": { 00:04:37.374 "mask": "0x1000", 00:04:37.374 "tpoint_mask": "0x0" 00:04:37.374 }, 00:04:37.374 "nvme_tcp": { 00:04:37.374 "mask": "0x2000", 00:04:37.374 "tpoint_mask": "0x0" 00:04:37.374 }, 00:04:37.374 "bdev_nvme": { 00:04:37.374 "mask": "0x4000", 00:04:37.374 "tpoint_mask": "0x0" 00:04:37.374 }, 00:04:37.374 "sock": { 00:04:37.374 "mask": "0x8000", 00:04:37.374 "tpoint_mask": "0x0" 00:04:37.374 }, 00:04:37.374 "blob": { 00:04:37.374 "mask": "0x10000", 00:04:37.374 "tpoint_mask": "0x0" 00:04:37.374 }, 00:04:37.374 "bdev_raid": { 00:04:37.374 "mask": "0x20000", 00:04:37.374 "tpoint_mask": "0x0" 00:04:37.374 }, 00:04:37.374 "scheduler": { 00:04:37.374 "mask": "0x40000", 00:04:37.374 "tpoint_mask": "0x0" 00:04:37.374 } 00:04:37.374 }' 00:04:37.374 18:11:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:37.374 18:11:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:37.374 18:11:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:37.374 18:11:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:37.374 18:11:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:37.374 18:11:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:37.374 18:11:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:37.635 18:11:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:37.635 18:11:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:37.635 18:11:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:37.635 00:04:37.635 real 0m0.187s 00:04:37.635 user 0m0.161s 00:04:37.635 sys 0m0.015s 00:04:37.635 18:11:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.635 18:11:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:37.635 ************************************ 00:04:37.635 END TEST rpc_trace_cmd_test 00:04:37.635 ************************************ 00:04:37.635 18:11:55 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:37.635 18:11:55 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:37.635 18:11:55 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:37.635 18:11:55 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.635 18:11:55 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.635 18:11:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.635 ************************************ 00:04:37.635 START TEST rpc_daemon_integrity 00:04:37.635 ************************************ 00:04:37.635 18:11:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:37.635 18:11:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:37.635 18:11:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.635 18:11:55 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.635 18:11:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.635 18:11:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:37.635 18:11:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:37.635 18:11:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:37.635 18:11:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:37.635 18:11:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.635 18:11:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.635 18:11:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.635 18:11:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:37.635 18:11:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:37.635 18:11:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.635 18:11:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.635 18:11:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.635 18:11:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:37.635 { 00:04:37.635 "name": "Malloc2", 00:04:37.635 "aliases": [ 00:04:37.635 "db3da9f4-398f-4816-a084-983b9797c212" 00:04:37.635 ], 00:04:37.635 "product_name": "Malloc disk", 00:04:37.635 "block_size": 512, 00:04:37.635 "num_blocks": 16384, 00:04:37.635 "uuid": "db3da9f4-398f-4816-a084-983b9797c212", 00:04:37.635 "assigned_rate_limits": { 00:04:37.635 "rw_ios_per_sec": 0, 00:04:37.635 "rw_mbytes_per_sec": 0, 00:04:37.635 "r_mbytes_per_sec": 0, 00:04:37.635 "w_mbytes_per_sec": 0 00:04:37.635 }, 00:04:37.635 "claimed": false, 00:04:37.635 "zoned": false, 00:04:37.635 "supported_io_types": { 00:04:37.635 "read": true, 00:04:37.635 "write": true, 00:04:37.635 "unmap": true, 00:04:37.635 "flush": true, 00:04:37.635 "reset": true, 00:04:37.635 "nvme_admin": false, 00:04:37.635 "nvme_io": false, 00:04:37.635 "nvme_io_md": false, 00:04:37.635 "write_zeroes": true, 00:04:37.635 "zcopy": true, 00:04:37.635 "get_zone_info": false, 00:04:37.635 "zone_management": false, 00:04:37.635 "zone_append": false, 00:04:37.635 "compare": false, 00:04:37.635 "compare_and_write": false, 00:04:37.635 "abort": true, 00:04:37.635 "seek_hole": false, 00:04:37.635 "seek_data": false, 00:04:37.635 "copy": true, 00:04:37.635 "nvme_iov_md": false 00:04:37.635 }, 00:04:37.635 "memory_domains": [ 00:04:37.635 { 00:04:37.635 "dma_device_id": "system", 00:04:37.635 "dma_device_type": 1 00:04:37.635 }, 00:04:37.635 { 00:04:37.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.635 "dma_device_type": 2 00:04:37.635 } 00:04:37.635 ], 00:04:37.635 "driver_specific": {} 00:04:37.635 } 00:04:37.635 ]' 00:04:37.635 18:11:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:37.635 18:11:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:37.635 18:11:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:37.635 18:11:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.635 18:11:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.635 [2024-10-08 18:11:55.859098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:37.635 
[2024-10-08 18:11:55.859134] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:37.635 [2024-10-08 18:11:55.859159] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1317de0 00:04:37.635 [2024-10-08 18:11:55.859171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:37.635 [2024-10-08 18:11:55.860363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:37.635 [2024-10-08 18:11:55.860390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:37.635 Passthru0 00:04:37.635 18:11:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.635 18:11:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:37.635 18:11:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.635 18:11:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.635 18:11:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.635 18:11:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:37.635 { 00:04:37.635 "name": "Malloc2", 00:04:37.635 "aliases": [ 00:04:37.635 "db3da9f4-398f-4816-a084-983b9797c212" 00:04:37.635 ], 00:04:37.635 "product_name": "Malloc disk", 00:04:37.635 "block_size": 512, 00:04:37.635 "num_blocks": 16384, 00:04:37.635 "uuid": "db3da9f4-398f-4816-a084-983b9797c212", 00:04:37.635 "assigned_rate_limits": { 00:04:37.635 "rw_ios_per_sec": 0, 00:04:37.635 "rw_mbytes_per_sec": 0, 00:04:37.635 "r_mbytes_per_sec": 0, 00:04:37.635 "w_mbytes_per_sec": 0 00:04:37.635 }, 00:04:37.635 "claimed": true, 00:04:37.635 "claim_type": "exclusive_write", 00:04:37.635 "zoned": false, 00:04:37.635 "supported_io_types": { 00:04:37.635 "read": true, 00:04:37.635 "write": true, 00:04:37.635 "unmap": true, 00:04:37.635 "flush": true, 00:04:37.635 "reset": true, 00:04:37.635 "nvme_admin": false, 00:04:37.635 "nvme_io": false, 00:04:37.635 "nvme_io_md": false, 00:04:37.635 "write_zeroes": true, 00:04:37.635 "zcopy": true, 00:04:37.635 "get_zone_info": false, 00:04:37.635 "zone_management": false, 00:04:37.635 "zone_append": false, 00:04:37.635 "compare": false, 00:04:37.635 "compare_and_write": false, 00:04:37.635 "abort": true, 00:04:37.635 "seek_hole": false, 00:04:37.635 "seek_data": false, 00:04:37.635 "copy": true, 00:04:37.635 "nvme_iov_md": false 00:04:37.635 }, 00:04:37.635 "memory_domains": [ 00:04:37.635 { 00:04:37.635 "dma_device_id": "system", 00:04:37.635 "dma_device_type": 1 00:04:37.635 }, 00:04:37.635 { 00:04:37.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.635 "dma_device_type": 2 00:04:37.635 } 00:04:37.635 ], 00:04:37.635 "driver_specific": {} 00:04:37.635 }, 00:04:37.635 { 00:04:37.635 "name": "Passthru0", 00:04:37.635 "aliases": [ 00:04:37.635 "69083cd5-0905-5c5f-805b-225f041f55ed" 00:04:37.635 ], 00:04:37.635 "product_name": "passthru", 00:04:37.635 "block_size": 512, 00:04:37.635 "num_blocks": 16384, 00:04:37.635 "uuid": "69083cd5-0905-5c5f-805b-225f041f55ed", 00:04:37.635 "assigned_rate_limits": { 00:04:37.635 "rw_ios_per_sec": 0, 00:04:37.635 "rw_mbytes_per_sec": 0, 00:04:37.635 "r_mbytes_per_sec": 0, 00:04:37.635 "w_mbytes_per_sec": 0 00:04:37.635 }, 00:04:37.635 "claimed": false, 00:04:37.635 "zoned": false, 00:04:37.635 "supported_io_types": { 00:04:37.635 "read": true, 00:04:37.635 "write": true, 00:04:37.635 "unmap": true, 00:04:37.635 "flush": true, 00:04:37.635 "reset": true, 
00:04:37.635 "nvme_admin": false, 00:04:37.635 "nvme_io": false, 00:04:37.635 "nvme_io_md": false, 00:04:37.635 "write_zeroes": true, 00:04:37.635 "zcopy": true, 00:04:37.635 "get_zone_info": false, 00:04:37.635 "zone_management": false, 00:04:37.635 "zone_append": false, 00:04:37.635 "compare": false, 00:04:37.635 "compare_and_write": false, 00:04:37.635 "abort": true, 00:04:37.635 "seek_hole": false, 00:04:37.635 "seek_data": false, 00:04:37.635 "copy": true, 00:04:37.635 "nvme_iov_md": false 00:04:37.635 }, 00:04:37.635 "memory_domains": [ 00:04:37.635 { 00:04:37.635 "dma_device_id": "system", 00:04:37.635 "dma_device_type": 1 00:04:37.635 }, 00:04:37.635 { 00:04:37.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:37.635 "dma_device_type": 2 00:04:37.635 } 00:04:37.635 ], 00:04:37.635 "driver_specific": { 00:04:37.635 "passthru": { 00:04:37.635 "name": "Passthru0", 00:04:37.635 "base_bdev_name": "Malloc2" 00:04:37.635 } 00:04:37.635 } 00:04:37.635 } 00:04:37.635 ]' 00:04:37.635 18:11:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:37.635 18:11:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:37.635 18:11:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:37.635 18:11:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.636 18:11:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.636 18:11:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.636 18:11:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:37.636 18:11:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.636 18:11:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.893 18:11:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.893 18:11:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:37.893 18:11:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.893 18:11:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.893 18:11:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.893 18:11:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:37.893 18:11:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:37.893 18:11:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:37.893 00:04:37.893 real 0m0.208s 00:04:37.893 user 0m0.136s 00:04:37.893 sys 0m0.018s 00:04:37.893 18:11:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.893 18:11:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:37.893 ************************************ 00:04:37.893 END TEST rpc_daemon_integrity 00:04:37.893 ************************************ 00:04:37.893 18:11:55 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:37.893 18:11:55 rpc -- rpc/rpc.sh@84 -- # killprocess 1121129 00:04:37.893 18:11:55 rpc -- common/autotest_common.sh@950 -- # '[' -z 1121129 ']' 00:04:37.893 18:11:55 rpc -- common/autotest_common.sh@954 -- # kill -0 1121129 00:04:37.893 18:11:55 rpc -- common/autotest_common.sh@955 -- # uname 00:04:37.893 18:11:55 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:37.893 18:11:55 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1121129 
00:04:37.893 18:11:56 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:37.893 18:11:56 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:37.893 18:11:56 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1121129' 00:04:37.893 killing process with pid 1121129 00:04:37.893 18:11:56 rpc -- common/autotest_common.sh@969 -- # kill 1121129 00:04:37.893 18:11:56 rpc -- common/autotest_common.sh@974 -- # wait 1121129 00:04:38.460 00:04:38.460 real 0m2.046s 00:04:38.460 user 0m2.494s 00:04:38.460 sys 0m0.616s 00:04:38.460 18:11:56 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.460 18:11:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.460 ************************************ 00:04:38.460 END TEST rpc 00:04:38.460 ************************************ 00:04:38.460 18:11:56 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:38.460 18:11:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.460 18:11:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.460 18:11:56 -- common/autotest_common.sh@10 -- # set +x 00:04:38.460 ************************************ 00:04:38.460 START TEST skip_rpc 00:04:38.460 ************************************ 00:04:38.460 18:11:56 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:38.460 * Looking for test storage... 00:04:38.460 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:38.460 18:11:56 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:38.460 18:11:56 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:38.460 18:11:56 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:38.460 18:11:56 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:38.460 18:11:56 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.460 18:11:56 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.460 18:11:56 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.460 18:11:56 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.460 18:11:56 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.460 18:11:56 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.460 18:11:56 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.460 18:11:56 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.460 18:11:56 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.460 18:11:56 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.460 18:11:56 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.460 18:11:56 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:38.460 18:11:56 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:38.460 18:11:56 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.460 18:11:56 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:38.460 18:11:56 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:38.460 18:11:56 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:38.460 18:11:56 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.460 18:11:56 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:38.460 18:11:56 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.460 18:11:56 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:38.460 18:11:56 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:38.460 18:11:56 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.460 18:11:56 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:38.460 18:11:56 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.460 18:11:56 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.460 18:11:56 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.460 18:11:56 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:38.460 18:11:56 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.460 18:11:56 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:38.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.460 --rc genhtml_branch_coverage=1 00:04:38.460 --rc genhtml_function_coverage=1 00:04:38.460 --rc genhtml_legend=1 00:04:38.460 --rc geninfo_all_blocks=1 00:04:38.460 --rc geninfo_unexecuted_blocks=1 00:04:38.460 00:04:38.460 ' 00:04:38.460 18:11:56 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:38.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.460 --rc genhtml_branch_coverage=1 00:04:38.460 --rc genhtml_function_coverage=1 00:04:38.460 --rc genhtml_legend=1 00:04:38.460 --rc geninfo_all_blocks=1 00:04:38.460 --rc geninfo_unexecuted_blocks=1 00:04:38.460 00:04:38.460 ' 00:04:38.460 18:11:56 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:38.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.460 --rc genhtml_branch_coverage=1 00:04:38.460 --rc genhtml_function_coverage=1 00:04:38.460 --rc genhtml_legend=1 00:04:38.460 --rc geninfo_all_blocks=1 00:04:38.460 --rc geninfo_unexecuted_blocks=1 00:04:38.460 00:04:38.460 ' 00:04:38.460 18:11:56 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:38.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.460 --rc genhtml_branch_coverage=1 00:04:38.460 --rc genhtml_function_coverage=1 00:04:38.460 --rc genhtml_legend=1 00:04:38.460 --rc geninfo_all_blocks=1 00:04:38.460 --rc geninfo_unexecuted_blocks=1 00:04:38.460 00:04:38.460 ' 00:04:38.460 18:11:56 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:38.460 18:11:56 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:38.460 18:11:56 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:38.460 18:11:56 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.460 18:11:56 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.460 18:11:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.460 ************************************ 00:04:38.460 START TEST skip_rpc 00:04:38.460 ************************************ 00:04:38.460 18:11:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:38.460 
18:11:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1121496 00:04:38.460 18:11:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:38.460 18:11:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.460 18:11:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:38.719 [2024-10-08 18:11:56.774142] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:04:38.719 [2024-10-08 18:11:56.774212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1121496 ] 00:04:38.719 [2024-10-08 18:11:56.829813] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.719 [2024-10-08 18:11:56.934998] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.980 18:12:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:43.980 18:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:43.980 18:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:43.980 18:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:43.980 18:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:43.980 18:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:43.980 18:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:43.980 18:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:43.980 18:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.980 18:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.981 18:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:43.981 18:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:43.981 18:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:43.981 18:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:43.981 18:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:43.981 18:12:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:43.981 18:12:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1121496 00:04:43.981 18:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1121496 ']' 00:04:43.981 18:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1121496 00:04:43.981 18:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:43.981 18:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:43.981 18:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1121496 00:04:43.981 18:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:43.981 18:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:43.981 18:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1121496' 00:04:43.981 killing process with pid 1121496 00:04:43.981 18:12:01 
skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1121496 00:04:43.981 18:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1121496 00:04:43.981 00:04:43.981 real 0m5.516s 00:04:43.981 user 0m5.197s 00:04:43.981 sys 0m0.328s 00:04:43.981 18:12:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.981 18:12:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.981 ************************************ 00:04:43.981 END TEST skip_rpc 00:04:43.981 ************************************ 00:04:43.981 18:12:02 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:43.981 18:12:02 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.981 18:12:02 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.981 18:12:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.238 ************************************ 00:04:44.238 START TEST skip_rpc_with_json 00:04:44.238 ************************************ 00:04:44.238 18:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:44.238 18:12:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:44.238 18:12:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1122271 00:04:44.238 18:12:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:44.238 18:12:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.238 18:12:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1122271 00:04:44.238 18:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1122271 ']' 00:04:44.238 18:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.238 18:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:44.238 18:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.238 18:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:44.238 18:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.238 [2024-10-08 18:12:02.342739] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:04:44.238 [2024-10-08 18:12:02.342821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1122271 ] 00:04:44.238 [2024-10-08 18:12:02.402300] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.238 [2024-10-08 18:12:02.515241] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.804 18:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:44.804 18:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:44.804 18:12:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:44.804 18:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.804 18:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.804 [2024-10-08 18:12:02.793556] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:44.804 request: 00:04:44.804 { 00:04:44.804 "trtype": "tcp", 00:04:44.804 "method": "nvmf_get_transports", 00:04:44.804 "req_id": 1 00:04:44.804 } 00:04:44.804 Got JSON-RPC error response 00:04:44.804 response: 00:04:44.804 { 00:04:44.804 "code": -19, 00:04:44.804 "message": "No such device" 00:04:44.804 } 00:04:44.804 18:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:44.804 18:12:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:44.804 18:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.804 18:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.804 [2024-10-08 18:12:02.801694] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:44.804 18:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.804 18:12:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:44.804 18:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.804 18:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.804 18:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.804 18:12:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:44.804 { 00:04:44.804 "subsystems": [ 00:04:44.804 { 00:04:44.804 "subsystem": "fsdev", 00:04:44.804 "config": [ 00:04:44.804 { 00:04:44.804 "method": "fsdev_set_opts", 00:04:44.804 "params": { 00:04:44.804 "fsdev_io_pool_size": 65535, 00:04:44.804 "fsdev_io_cache_size": 256 00:04:44.804 } 00:04:44.804 } 00:04:44.804 ] 00:04:44.804 }, 00:04:44.804 { 00:04:44.804 "subsystem": "vfio_user_target", 00:04:44.804 "config": null 00:04:44.804 }, 00:04:44.804 { 00:04:44.804 "subsystem": "keyring", 00:04:44.804 "config": [] 00:04:44.804 }, 00:04:44.804 { 00:04:44.804 "subsystem": "iobuf", 00:04:44.804 "config": [ 00:04:44.804 { 00:04:44.804 "method": "iobuf_set_options", 00:04:44.804 "params": { 00:04:44.804 "small_pool_count": 8192, 00:04:44.804 "large_pool_count": 1024, 00:04:44.804 "small_bufsize": 8192, 00:04:44.804 "large_bufsize": 135168 00:04:44.804 } 00:04:44.804 } 00:04:44.804 ] 00:04:44.804 }, 00:04:44.804 { 
00:04:44.804 "subsystem": "sock", 00:04:44.804 "config": [ 00:04:44.804 { 00:04:44.804 "method": "sock_set_default_impl", 00:04:44.804 "params": { 00:04:44.804 "impl_name": "posix" 00:04:44.804 } 00:04:44.804 }, 00:04:44.804 { 00:04:44.804 "method": "sock_impl_set_options", 00:04:44.804 "params": { 00:04:44.804 "impl_name": "ssl", 00:04:44.804 "recv_buf_size": 4096, 00:04:44.804 "send_buf_size": 4096, 00:04:44.804 "enable_recv_pipe": true, 00:04:44.804 "enable_quickack": false, 00:04:44.804 "enable_placement_id": 0, 00:04:44.804 "enable_zerocopy_send_server": true, 00:04:44.804 "enable_zerocopy_send_client": false, 00:04:44.804 "zerocopy_threshold": 0, 00:04:44.804 "tls_version": 0, 00:04:44.804 "enable_ktls": false 00:04:44.804 } 00:04:44.804 }, 00:04:44.804 { 00:04:44.804 "method": "sock_impl_set_options", 00:04:44.804 "params": { 00:04:44.804 "impl_name": "posix", 00:04:44.804 "recv_buf_size": 2097152, 00:04:44.804 "send_buf_size": 2097152, 00:04:44.804 "enable_recv_pipe": true, 00:04:44.804 "enable_quickack": false, 00:04:44.804 "enable_placement_id": 0, 00:04:44.804 "enable_zerocopy_send_server": true, 00:04:44.804 "enable_zerocopy_send_client": false, 00:04:44.804 "zerocopy_threshold": 0, 00:04:44.804 "tls_version": 0, 00:04:44.804 "enable_ktls": false 00:04:44.804 } 00:04:44.804 } 00:04:44.804 ] 00:04:44.804 }, 00:04:44.804 { 00:04:44.804 "subsystem": "vmd", 00:04:44.804 "config": [] 00:04:44.804 }, 00:04:44.804 { 00:04:44.804 "subsystem": "accel", 00:04:44.804 "config": [ 00:04:44.804 { 00:04:44.804 "method": "accel_set_options", 00:04:44.804 "params": { 00:04:44.804 "small_cache_size": 128, 00:04:44.804 "large_cache_size": 16, 00:04:44.804 "task_count": 2048, 00:04:44.804 "sequence_count": 2048, 00:04:44.804 "buf_count": 2048 00:04:44.804 } 00:04:44.804 } 00:04:44.804 ] 00:04:44.804 }, 00:04:44.804 { 00:04:44.804 "subsystem": "bdev", 00:04:44.804 "config": [ 00:04:44.804 { 00:04:44.804 "method": "bdev_set_options", 00:04:44.804 "params": { 00:04:44.804 "bdev_io_pool_size": 65535, 00:04:44.804 "bdev_io_cache_size": 256, 00:04:44.804 "bdev_auto_examine": true, 00:04:44.804 "iobuf_small_cache_size": 128, 00:04:44.804 "iobuf_large_cache_size": 16 00:04:44.804 } 00:04:44.804 }, 00:04:44.804 { 00:04:44.804 "method": "bdev_raid_set_options", 00:04:44.804 "params": { 00:04:44.804 "process_window_size_kb": 1024, 00:04:44.804 "process_max_bandwidth_mb_sec": 0 00:04:44.804 } 00:04:44.804 }, 00:04:44.804 { 00:04:44.804 "method": "bdev_iscsi_set_options", 00:04:44.804 "params": { 00:04:44.804 "timeout_sec": 30 00:04:44.804 } 00:04:44.804 }, 00:04:44.804 { 00:04:44.804 "method": "bdev_nvme_set_options", 00:04:44.804 "params": { 00:04:44.804 "action_on_timeout": "none", 00:04:44.804 "timeout_us": 0, 00:04:44.804 "timeout_admin_us": 0, 00:04:44.804 "keep_alive_timeout_ms": 10000, 00:04:44.804 "arbitration_burst": 0, 00:04:44.804 "low_priority_weight": 0, 00:04:44.804 "medium_priority_weight": 0, 00:04:44.804 "high_priority_weight": 0, 00:04:44.804 "nvme_adminq_poll_period_us": 10000, 00:04:44.804 "nvme_ioq_poll_period_us": 0, 00:04:44.804 "io_queue_requests": 0, 00:04:44.804 "delay_cmd_submit": true, 00:04:44.805 "transport_retry_count": 4, 00:04:44.805 "bdev_retry_count": 3, 00:04:44.805 "transport_ack_timeout": 0, 00:04:44.805 "ctrlr_loss_timeout_sec": 0, 00:04:44.805 "reconnect_delay_sec": 0, 00:04:44.805 "fast_io_fail_timeout_sec": 0, 00:04:44.805 "disable_auto_failback": false, 00:04:44.805 "generate_uuids": false, 00:04:44.805 "transport_tos": 0, 00:04:44.805 "nvme_error_stat": false, 
00:04:44.805 "rdma_srq_size": 0, 00:04:44.805 "io_path_stat": false, 00:04:44.805 "allow_accel_sequence": false, 00:04:44.805 "rdma_max_cq_size": 0, 00:04:44.805 "rdma_cm_event_timeout_ms": 0, 00:04:44.805 "dhchap_digests": [ 00:04:44.805 "sha256", 00:04:44.805 "sha384", 00:04:44.805 "sha512" 00:04:44.805 ], 00:04:44.805 "dhchap_dhgroups": [ 00:04:44.805 "null", 00:04:44.805 "ffdhe2048", 00:04:44.805 "ffdhe3072", 00:04:44.805 "ffdhe4096", 00:04:44.805 "ffdhe6144", 00:04:44.805 "ffdhe8192" 00:04:44.805 ] 00:04:44.805 } 00:04:44.805 }, 00:04:44.805 { 00:04:44.805 "method": "bdev_nvme_set_hotplug", 00:04:44.805 "params": { 00:04:44.805 "period_us": 100000, 00:04:44.805 "enable": false 00:04:44.805 } 00:04:44.805 }, 00:04:44.805 { 00:04:44.805 "method": "bdev_wait_for_examine" 00:04:44.805 } 00:04:44.805 ] 00:04:44.805 }, 00:04:44.805 { 00:04:44.805 "subsystem": "scsi", 00:04:44.805 "config": null 00:04:44.805 }, 00:04:44.805 { 00:04:44.805 "subsystem": "scheduler", 00:04:44.805 "config": [ 00:04:44.805 { 00:04:44.805 "method": "framework_set_scheduler", 00:04:44.805 "params": { 00:04:44.805 "name": "static" 00:04:44.805 } 00:04:44.805 } 00:04:44.805 ] 00:04:44.805 }, 00:04:44.805 { 00:04:44.805 "subsystem": "vhost_scsi", 00:04:44.805 "config": [] 00:04:44.805 }, 00:04:44.805 { 00:04:44.805 "subsystem": "vhost_blk", 00:04:44.805 "config": [] 00:04:44.805 }, 00:04:44.805 { 00:04:44.805 "subsystem": "ublk", 00:04:44.805 "config": [] 00:04:44.805 }, 00:04:44.805 { 00:04:44.805 "subsystem": "nbd", 00:04:44.805 "config": [] 00:04:44.805 }, 00:04:44.805 { 00:04:44.805 "subsystem": "nvmf", 00:04:44.805 "config": [ 00:04:44.805 { 00:04:44.805 "method": "nvmf_set_config", 00:04:44.805 "params": { 00:04:44.805 "discovery_filter": "match_any", 00:04:44.805 "admin_cmd_passthru": { 00:04:44.805 "identify_ctrlr": false 00:04:44.805 }, 00:04:44.805 "dhchap_digests": [ 00:04:44.805 "sha256", 00:04:44.805 "sha384", 00:04:44.805 "sha512" 00:04:44.805 ], 00:04:44.805 "dhchap_dhgroups": [ 00:04:44.805 "null", 00:04:44.805 "ffdhe2048", 00:04:44.805 "ffdhe3072", 00:04:44.805 "ffdhe4096", 00:04:44.805 "ffdhe6144", 00:04:44.805 "ffdhe8192" 00:04:44.805 ] 00:04:44.805 } 00:04:44.805 }, 00:04:44.805 { 00:04:44.805 "method": "nvmf_set_max_subsystems", 00:04:44.805 "params": { 00:04:44.805 "max_subsystems": 1024 00:04:44.805 } 00:04:44.805 }, 00:04:44.805 { 00:04:44.805 "method": "nvmf_set_crdt", 00:04:44.805 "params": { 00:04:44.805 "crdt1": 0, 00:04:44.805 "crdt2": 0, 00:04:44.805 "crdt3": 0 00:04:44.805 } 00:04:44.805 }, 00:04:44.805 { 00:04:44.805 "method": "nvmf_create_transport", 00:04:44.805 "params": { 00:04:44.805 "trtype": "TCP", 00:04:44.805 "max_queue_depth": 128, 00:04:44.805 "max_io_qpairs_per_ctrlr": 127, 00:04:44.805 "in_capsule_data_size": 4096, 00:04:44.805 "max_io_size": 131072, 00:04:44.805 "io_unit_size": 131072, 00:04:44.805 "max_aq_depth": 128, 00:04:44.805 "num_shared_buffers": 511, 00:04:44.805 "buf_cache_size": 4294967295, 00:04:44.805 "dif_insert_or_strip": false, 00:04:44.805 "zcopy": false, 00:04:44.805 "c2h_success": true, 00:04:44.805 "sock_priority": 0, 00:04:44.805 "abort_timeout_sec": 1, 00:04:44.805 "ack_timeout": 0, 00:04:44.805 "data_wr_pool_size": 0 00:04:44.805 } 00:04:44.805 } 00:04:44.805 ] 00:04:44.805 }, 00:04:44.805 { 00:04:44.805 "subsystem": "iscsi", 00:04:44.805 "config": [ 00:04:44.805 { 00:04:44.805 "method": "iscsi_set_options", 00:04:44.805 "params": { 00:04:44.805 "node_base": "iqn.2016-06.io.spdk", 00:04:44.805 "max_sessions": 128, 00:04:44.805 
"max_connections_per_session": 2, 00:04:44.805 "max_queue_depth": 64, 00:04:44.805 "default_time2wait": 2, 00:04:44.805 "default_time2retain": 20, 00:04:44.805 "first_burst_length": 8192, 00:04:44.805 "immediate_data": true, 00:04:44.805 "allow_duplicated_isid": false, 00:04:44.805 "error_recovery_level": 0, 00:04:44.805 "nop_timeout": 60, 00:04:44.805 "nop_in_interval": 30, 00:04:44.805 "disable_chap": false, 00:04:44.805 "require_chap": false, 00:04:44.805 "mutual_chap": false, 00:04:44.805 "chap_group": 0, 00:04:44.805 "max_large_datain_per_connection": 64, 00:04:44.805 "max_r2t_per_connection": 4, 00:04:44.805 "pdu_pool_size": 36864, 00:04:44.805 "immediate_data_pool_size": 16384, 00:04:44.805 "data_out_pool_size": 2048 00:04:44.805 } 00:04:44.805 } 00:04:44.805 ] 00:04:44.805 } 00:04:44.805 ] 00:04:44.805 } 00:04:44.805 18:12:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:44.805 18:12:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1122271 00:04:44.805 18:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1122271 ']' 00:04:44.805 18:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1122271 00:04:44.805 18:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:44.805 18:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:44.805 18:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1122271 00:04:44.805 18:12:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:44.805 18:12:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:44.805 18:12:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1122271' 00:04:44.805 killing process with pid 1122271 00:04:44.805 18:12:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1122271 00:04:44.805 18:12:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1122271 00:04:45.371 18:12:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1122443 00:04:45.371 18:12:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:45.371 18:12:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:50.673 18:12:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1122443 00:04:50.673 18:12:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1122443 ']' 00:04:50.673 18:12:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1122443 00:04:50.673 18:12:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:50.673 18:12:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:50.674 18:12:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1122443 00:04:50.674 18:12:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:50.674 18:12:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:50.674 18:12:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 1122443' 00:04:50.674 killing process with pid 1122443 00:04:50.674 18:12:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1122443 00:04:50.674 18:12:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1122443 00:04:50.931 18:12:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:50.932 18:12:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:50.932 00:04:50.932 real 0m6.709s 00:04:50.932 user 0m6.323s 00:04:50.932 sys 0m0.712s 00:04:50.932 18:12:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.932 18:12:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:50.932 ************************************ 00:04:50.932 END TEST skip_rpc_with_json 00:04:50.932 ************************************ 00:04:50.932 18:12:09 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:50.932 18:12:09 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.932 18:12:09 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.932 18:12:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.932 ************************************ 00:04:50.932 START TEST skip_rpc_with_delay 00:04:50.932 ************************************ 00:04:50.932 18:12:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:50.932 18:12:09 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:50.932 18:12:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:50.932 18:12:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:50.932 18:12:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.932 18:12:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:50.932 18:12:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.932 18:12:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:50.932 18:12:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.932 18:12:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:50.932 18:12:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.932 18:12:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:50.932 18:12:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:50.932 [2024-10-08 
18:12:09.099921] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:50.932 [2024-10-08 18:12:09.100040] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:50.932 18:12:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:50.932 18:12:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:50.932 18:12:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:50.932 18:12:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:50.932 00:04:50.932 real 0m0.073s 00:04:50.932 user 0m0.051s 00:04:50.932 sys 0m0.022s 00:04:50.932 18:12:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.932 18:12:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:50.932 ************************************ 00:04:50.932 END TEST skip_rpc_with_delay 00:04:50.932 ************************************ 00:04:50.932 18:12:09 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:50.932 18:12:09 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:50.932 18:12:09 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:50.932 18:12:09 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.932 18:12:09 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.932 18:12:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.932 ************************************ 00:04:50.932 START TEST exit_on_failed_rpc_init 00:04:50.932 ************************************ 00:04:50.932 18:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:50.932 18:12:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1123639 00:04:50.932 18:12:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:50.932 18:12:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1123639 00:04:50.932 18:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1123639 ']' 00:04:50.932 18:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.932 18:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:50.932 18:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.932 18:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:50.932 18:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:51.190 [2024-10-08 18:12:09.224464] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:04:51.190 [2024-10-08 18:12:09.224551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1123639 ] 00:04:51.190 [2024-10-08 18:12:09.284665] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.190 [2024-10-08 18:12:09.392823] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.448 18:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:51.448 18:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:51.448 18:12:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.448 18:12:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:51.448 18:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:51.448 18:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:51.448 18:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.448 18:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:51.448 18:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.448 18:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:51.448 18:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.448 18:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:51.448 18:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:51.448 18:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:51.448 18:12:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:51.706 [2024-10-08 18:12:09.737391] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:04:51.706 [2024-10-08 18:12:09.737469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1123756 ] 00:04:51.706 [2024-10-08 18:12:09.794530] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.706 [2024-10-08 18:12:09.906683] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.706 [2024-10-08 18:12:09.906802] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
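Note on the failure above: exit_on_failed_rpc_init starts a second spdk_tgt (-m 0x2) against the default RPC socket on purpose, so the "/var/tmp/spdk.sock in use. Specify another." error is the expected outcome of this test. Outside the test, two targets can coexist by giving each instance its own RPC socket with -r, as the json_config run later in this log does; a minimal sketch using the same build, with /var/tmp/spdk2.sock as an arbitrary example socket name:

  # first instance keeps the default RPC socket (/var/tmp/spdk.sock)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 &
  # second instance listens on its own socket instead of colliding
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &
  # talk to the second instance by passing the same socket path to rpc.py
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version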
00:04:51.706 [2024-10-08 18:12:09.906821] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:51.706 [2024-10-08 18:12:09.906839] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:51.963 18:12:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:51.963 18:12:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:51.963 18:12:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:51.963 18:12:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:51.963 18:12:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:51.963 18:12:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:51.963 18:12:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:51.963 18:12:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1123639 00:04:51.963 18:12:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1123639 ']' 00:04:51.963 18:12:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1123639 00:04:51.963 18:12:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:51.963 18:12:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:51.963 18:12:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1123639 00:04:51.963 18:12:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:51.963 18:12:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:51.963 18:12:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1123639' 00:04:51.963 killing process with pid 1123639 00:04:51.963 18:12:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1123639 00:04:51.963 18:12:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1123639 00:04:52.529 00:04:52.529 real 0m1.382s 00:04:52.529 user 0m1.560s 00:04:52.529 sys 0m0.473s 00:04:52.529 18:12:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:52.529 18:12:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:52.529 ************************************ 00:04:52.529 END TEST exit_on_failed_rpc_init 00:04:52.529 ************************************ 00:04:52.529 18:12:10 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:52.529 00:04:52.529 real 0m14.029s 00:04:52.529 user 0m13.305s 00:04:52.529 sys 0m1.731s 00:04:52.529 18:12:10 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:52.529 18:12:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.529 ************************************ 00:04:52.529 END TEST skip_rpc 00:04:52.529 ************************************ 00:04:52.529 18:12:10 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:52.529 18:12:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:52.529 18:12:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:52.529 18:12:10 -- 
common/autotest_common.sh@10 -- # set +x 00:04:52.529 ************************************ 00:04:52.529 START TEST rpc_client 00:04:52.529 ************************************ 00:04:52.529 18:12:10 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:52.529 * Looking for test storage... 00:04:52.529 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:52.529 18:12:10 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:52.529 18:12:10 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:04:52.529 18:12:10 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:52.529 18:12:10 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:52.529 18:12:10 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.529 18:12:10 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.529 18:12:10 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.529 18:12:10 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.529 18:12:10 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.529 18:12:10 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.529 18:12:10 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.529 18:12:10 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.529 18:12:10 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.529 18:12:10 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.529 18:12:10 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.529 18:12:10 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:52.529 18:12:10 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:52.529 18:12:10 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.529 18:12:10 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.529 18:12:10 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:52.529 18:12:10 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:52.529 18:12:10 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.529 18:12:10 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:52.529 18:12:10 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.529 18:12:10 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:52.529 18:12:10 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:52.529 18:12:10 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.529 18:12:10 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:52.529 18:12:10 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.529 18:12:10 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.529 18:12:10 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.529 18:12:10 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:52.529 18:12:10 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.529 18:12:10 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:52.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.529 --rc genhtml_branch_coverage=1 00:04:52.529 --rc genhtml_function_coverage=1 00:04:52.529 --rc genhtml_legend=1 00:04:52.529 --rc geninfo_all_blocks=1 00:04:52.529 --rc geninfo_unexecuted_blocks=1 00:04:52.529 00:04:52.529 ' 00:04:52.529 18:12:10 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:52.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.529 --rc genhtml_branch_coverage=1 00:04:52.529 --rc genhtml_function_coverage=1 00:04:52.529 --rc genhtml_legend=1 00:04:52.529 --rc geninfo_all_blocks=1 00:04:52.529 --rc geninfo_unexecuted_blocks=1 00:04:52.529 00:04:52.529 ' 00:04:52.529 18:12:10 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:52.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.529 --rc genhtml_branch_coverage=1 00:04:52.529 --rc genhtml_function_coverage=1 00:04:52.529 --rc genhtml_legend=1 00:04:52.529 --rc geninfo_all_blocks=1 00:04:52.529 --rc geninfo_unexecuted_blocks=1 00:04:52.529 00:04:52.529 ' 00:04:52.529 18:12:10 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:52.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.529 --rc genhtml_branch_coverage=1 00:04:52.529 --rc genhtml_function_coverage=1 00:04:52.529 --rc genhtml_legend=1 00:04:52.529 --rc geninfo_all_blocks=1 00:04:52.529 --rc geninfo_unexecuted_blocks=1 00:04:52.529 00:04:52.529 ' 00:04:52.529 18:12:10 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:52.529 OK 00:04:52.529 18:12:10 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:52.529 00:04:52.529 real 0m0.153s 00:04:52.529 user 0m0.105s 00:04:52.529 sys 0m0.058s 00:04:52.529 18:12:10 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:52.529 18:12:10 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:52.529 ************************************ 00:04:52.529 END TEST rpc_client 00:04:52.529 ************************************ 00:04:52.529 18:12:10 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:04:52.529 18:12:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:52.529 18:12:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:52.529 18:12:10 -- common/autotest_common.sh@10 -- # set +x 00:04:52.788 ************************************ 00:04:52.788 START TEST json_config 00:04:52.788 ************************************ 00:04:52.788 18:12:10 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:52.788 18:12:10 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:52.788 18:12:10 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:04:52.788 18:12:10 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:52.788 18:12:10 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:52.788 18:12:10 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.788 18:12:10 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.788 18:12:10 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.788 18:12:10 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.788 18:12:10 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.788 18:12:10 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.788 18:12:10 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.788 18:12:10 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.788 18:12:10 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.788 18:12:10 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.788 18:12:10 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.788 18:12:10 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:52.788 18:12:10 json_config -- scripts/common.sh@345 -- # : 1 00:04:52.788 18:12:10 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.788 18:12:10 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.788 18:12:10 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:52.788 18:12:10 json_config -- scripts/common.sh@353 -- # local d=1 00:04:52.788 18:12:10 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.788 18:12:10 json_config -- scripts/common.sh@355 -- # echo 1 00:04:52.788 18:12:10 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.788 18:12:10 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:52.788 18:12:10 json_config -- scripts/common.sh@353 -- # local d=2 00:04:52.788 18:12:10 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.788 18:12:10 json_config -- scripts/common.sh@355 -- # echo 2 00:04:52.788 18:12:10 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.788 18:12:10 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.788 18:12:10 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.788 18:12:10 json_config -- scripts/common.sh@368 -- # return 0 00:04:52.788 18:12:10 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.788 18:12:10 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:52.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.788 --rc genhtml_branch_coverage=1 00:04:52.788 --rc genhtml_function_coverage=1 00:04:52.788 --rc genhtml_legend=1 00:04:52.788 --rc geninfo_all_blocks=1 00:04:52.788 --rc geninfo_unexecuted_blocks=1 00:04:52.788 00:04:52.788 ' 00:04:52.788 18:12:10 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:52.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.788 --rc genhtml_branch_coverage=1 00:04:52.788 --rc genhtml_function_coverage=1 00:04:52.788 --rc genhtml_legend=1 00:04:52.788 --rc geninfo_all_blocks=1 00:04:52.788 --rc geninfo_unexecuted_blocks=1 00:04:52.788 00:04:52.788 ' 00:04:52.788 18:12:10 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:52.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.788 --rc genhtml_branch_coverage=1 00:04:52.788 --rc genhtml_function_coverage=1 00:04:52.788 --rc genhtml_legend=1 00:04:52.788 --rc geninfo_all_blocks=1 00:04:52.788 --rc geninfo_unexecuted_blocks=1 00:04:52.788 00:04:52.788 ' 00:04:52.788 18:12:10 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:52.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.788 --rc genhtml_branch_coverage=1 00:04:52.788 --rc genhtml_function_coverage=1 00:04:52.788 --rc genhtml_legend=1 00:04:52.788 --rc geninfo_all_blocks=1 00:04:52.788 --rc geninfo_unexecuted_blocks=1 00:04:52.788 00:04:52.788 ' 00:04:52.788 18:12:10 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:52.788 18:12:10 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:52.788 18:12:10 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:52.788 18:12:10 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:52.788 18:12:10 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:52.788 18:12:10 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:52.788 18:12:10 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:52.788 18:12:10 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:52.788 18:12:10 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:52.788 18:12:10 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:52.788 18:12:10 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:52.788 18:12:10 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:52.788 18:12:10 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:04:52.788 18:12:10 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:04:52.788 18:12:10 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:52.788 18:12:10 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:52.789 18:12:10 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:52.789 18:12:10 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:52.789 18:12:10 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:52.789 18:12:10 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:52.789 18:12:10 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:52.789 18:12:10 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:52.789 18:12:10 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:52.789 18:12:10 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.789 18:12:10 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.789 18:12:10 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.789 18:12:10 json_config -- paths/export.sh@5 -- # export PATH 00:04:52.789 18:12:10 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.789 18:12:10 json_config -- nvmf/common.sh@51 -- # : 0 00:04:52.789 18:12:10 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:52.789 18:12:10 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:52.789 18:12:10 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:52.789 18:12:10 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:52.789 18:12:10 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:52.789 18:12:10 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:52.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:52.789 18:12:10 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:52.789 18:12:10 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:52.789 18:12:10 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:52.789 18:12:10 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:52.789 18:12:10 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:52.789 18:12:10 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:52.789 18:12:10 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:52.789 18:12:10 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:52.789 18:12:10 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:52.789 18:12:10 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:52.789 18:12:10 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:52.789 18:12:10 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:52.789 18:12:10 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:52.789 18:12:10 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:52.789 18:12:10 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:52.789 18:12:10 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:52.789 18:12:10 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:52.789 18:12:10 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:52.789 18:12:10 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:52.789 INFO: JSON configuration test init 00:04:52.789 18:12:10 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:52.789 18:12:10 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:52.789 18:12:10 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:52.789 18:12:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.789 18:12:10 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:52.789 18:12:10 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:52.789 18:12:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.789 18:12:10 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:52.789 18:12:10 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:52.789 18:12:10 json_config -- json_config/common.sh@10 -- # shift 00:04:52.789 18:12:10 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:52.789 18:12:10 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:52.789 18:12:10 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:52.789 18:12:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:52.789 18:12:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:52.789 18:12:10 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1124020 00:04:52.789 18:12:10 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:52.789 18:12:10 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:52.789 Waiting for target to run... 00:04:52.789 18:12:10 json_config -- json_config/common.sh@25 -- # waitforlisten 1124020 /var/tmp/spdk_tgt.sock 00:04:52.789 18:12:10 json_config -- common/autotest_common.sh@831 -- # '[' -z 1124020 ']' 00:04:52.789 18:12:10 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:52.789 18:12:10 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:52.789 18:12:10 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:52.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:52.789 18:12:10 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:52.789 18:12:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.789 [2024-10-08 18:12:11.031206] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:04:52.789 [2024-10-08 18:12:11.031323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1124020 ] 00:04:53.356 [2024-10-08 18:12:11.368441] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.356 [2024-10-08 18:12:11.451547] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.921 18:12:12 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:53.921 18:12:12 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:53.921 18:12:12 json_config -- json_config/common.sh@26 -- # echo '' 00:04:53.921 00:04:53.921 18:12:12 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:53.921 18:12:12 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:53.921 18:12:12 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:53.921 18:12:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.921 18:12:12 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:53.921 18:12:12 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:53.921 18:12:12 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:53.921 18:12:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.921 18:12:12 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:53.921 18:12:12 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:53.921 18:12:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:57.217 18:12:15 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:57.217 18:12:15 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:57.217 18:12:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:57.217 18:12:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.217 18:12:15 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:57.217 18:12:15 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:57.217 18:12:15 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:57.217 18:12:15 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:57.217 18:12:15 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:57.217 18:12:15 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:57.217 18:12:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:57.217 18:12:15 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:57.217 18:12:15 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:57.217 18:12:15 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:57.217 18:12:15 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:57.217 18:12:15 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:57.217 18:12:15 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:57.217 18:12:15 json_config -- json_config/json_config.sh@54 -- # sort 00:04:57.217 18:12:15 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:57.217 18:12:15 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:57.217 18:12:15 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:57.217 18:12:15 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:57.217 18:12:15 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:57.217 18:12:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.217 18:12:15 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:57.217 18:12:15 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:57.217 18:12:15 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:57.475 18:12:15 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:57.475 18:12:15 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:57.475 18:12:15 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:57.475 18:12:15 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:57.475 18:12:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:57.475 18:12:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.475 18:12:15 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:57.475 18:12:15 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:57.475 18:12:15 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:57.475 18:12:15 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:57.475 18:12:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:57.475 MallocForNvmf0 00:04:57.732 18:12:15 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:57.732 18:12:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:57.989 MallocForNvmf1 00:04:57.989 18:12:16 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:57.989 18:12:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:58.245 [2024-10-08 18:12:16.301638] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:58.245 18:12:16 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:58.245 18:12:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:58.502 18:12:16 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:58.502 18:12:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:58.760 18:12:16 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:58.760 18:12:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:59.018 18:12:17 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:59.018 18:12:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:59.275 [2024-10-08 18:12:17.364980] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:59.275 18:12:17 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:59.275 18:12:17 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:59.275 18:12:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.275 18:12:17 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:59.275 18:12:17 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:59.275 18:12:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.275 18:12:17 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:59.275 18:12:17 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:59.275 18:12:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:59.533 MallocBdevForConfigChangeCheck 00:04:59.533 18:12:17 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:59.533 18:12:17 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:59.533 18:12:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.533 18:12:17 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:59.533 18:12:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:00.098 18:12:18 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:00.098 INFO: shutting down applications... 
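At this point, before the shutdown that follows, the target holds the nvmf configuration built over /var/tmp/spdk_tgt.sock: the malloc bdevs created above, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with two namespaces, and a listener on 127.0.0.1:4420, all captured by save_config. A sketch of the nvmf portion of that sequence issued by hand, using the same RPCs recorded above (redirecting save_config to spdk_tgt_config.json is an assumption based on the configs_path noted earlier):

  # same RPCs the test issued, all against the target's RPC socket
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
  $rpc -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  $rpc -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
  $rpc -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  # snapshot the resulting configuration for later replay with --json
  $rpc -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json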
00:05:00.098 18:12:18 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:00.098 18:12:18 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:00.098 18:12:18 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:00.098 18:12:18 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:01.523 Calling clear_iscsi_subsystem 00:05:01.523 Calling clear_nvmf_subsystem 00:05:01.523 Calling clear_nbd_subsystem 00:05:01.523 Calling clear_ublk_subsystem 00:05:01.523 Calling clear_vhost_blk_subsystem 00:05:01.523 Calling clear_vhost_scsi_subsystem 00:05:01.523 Calling clear_bdev_subsystem 00:05:01.523 18:12:19 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:01.523 18:12:19 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:01.523 18:12:19 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:01.523 18:12:19 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:01.523 18:12:19 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:01.523 18:12:19 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:02.089 18:12:20 json_config -- json_config/json_config.sh@352 -- # break 00:05:02.089 18:12:20 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:02.089 18:12:20 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:02.089 18:12:20 json_config -- json_config/common.sh@31 -- # local app=target 00:05:02.089 18:12:20 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:02.089 18:12:20 json_config -- json_config/common.sh@35 -- # [[ -n 1124020 ]] 00:05:02.089 18:12:20 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1124020 00:05:02.089 18:12:20 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:02.089 18:12:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:02.089 18:12:20 json_config -- json_config/common.sh@41 -- # kill -0 1124020 00:05:02.089 18:12:20 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:02.347 18:12:20 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:02.347 18:12:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:02.347 18:12:20 json_config -- json_config/common.sh@41 -- # kill -0 1124020 00:05:02.347 18:12:20 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:02.347 18:12:20 json_config -- json_config/common.sh@43 -- # break 00:05:02.347 18:12:20 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:02.347 18:12:20 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:02.347 SPDK target shutdown done 00:05:02.347 18:12:20 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:02.347 INFO: relaunching applications... 
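The shutdown traced above (clear_config.py, then json_config_test_shutdown_app) follows a SIGINT-then-poll pattern. A minimal sketch, with the 30-iteration / 0.5 s cadence taken from the json_config/common.sh trace and the force-kill fallback added as an illustrative assumption:

    # Ask the target to exit cleanly, then wait up to ~15 s for the PID to go away.
    shutdown_app() {
        local pid=$1
        kill -SIGINT "$pid" 2>/dev/null || return 0      # already gone
        for (( i = 0; i < 30; i++ )); do
            kill -0 "$pid" 2>/dev/null || {
                echo "SPDK target shutdown done"
                return 0
            }
            sleep 0.5
        done
        echo "app $pid did not exit in time, forcing it" >&2   # fallback is illustrative
        kill -9 "$pid" 2>/dev/null
    }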
00:05:02.347 18:12:20 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:02.347 18:12:20 json_config -- json_config/common.sh@9 -- # local app=target 00:05:02.347 18:12:20 json_config -- json_config/common.sh@10 -- # shift 00:05:02.347 18:12:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:02.347 18:12:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:02.347 18:12:20 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:02.347 18:12:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:02.347 18:12:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:02.347 18:12:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1125341 00:05:02.347 18:12:20 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:02.347 18:12:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:02.347 Waiting for target to run... 00:05:02.347 18:12:20 json_config -- json_config/common.sh@25 -- # waitforlisten 1125341 /var/tmp/spdk_tgt.sock 00:05:02.347 18:12:20 json_config -- common/autotest_common.sh@831 -- # '[' -z 1125341 ']' 00:05:02.347 18:12:20 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:02.347 18:12:20 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:02.347 18:12:20 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:02.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:02.347 18:12:20 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:02.347 18:12:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.606 [2024-10-08 18:12:20.664797] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:05:02.606 [2024-10-08 18:12:20.664906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1125341 ] 00:05:03.173 [2024-10-08 18:12:21.268375] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.173 [2024-10-08 18:12:21.364259] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.453 [2024-10-08 18:12:24.418978] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:06.453 [2024-10-08 18:12:24.451488] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:07.018 18:12:25 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:07.018 18:12:25 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:07.018 18:12:25 json_config -- json_config/common.sh@26 -- # echo '' 00:05:07.018 00:05:07.018 18:12:25 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:07.018 18:12:25 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:07.018 INFO: Checking if target configuration is the same... 
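Relaunching from the saved configuration, as traced above, is spdk_tgt pointed at the JSON file, after which waitforlisten blocks until the RPC socket answers. The helper's exact implementation is not shown in this log, so the readiness poll below is only a plausible equivalent (rpc_get_methods retried until it succeeds):

    # Restart the target from the previously saved config (command as in the trace).
    "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json "$SPDK_DIR/spdk_tgt_config.json" &
    tgt_pid=$!

    # Poll the RPC socket until the app is ready (assumed stand-in for waitforlisten).
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock -t 1 rpc_get_methods \
            > /dev/null 2>&1; do
        sleep 0.5
    done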
00:05:07.018 18:12:25 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:07.018 18:12:25 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:07.018 18:12:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:07.018 + '[' 2 -ne 2 ']' 00:05:07.018 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:07.018 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:07.018 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:07.018 +++ basename /dev/fd/62 00:05:07.018 ++ mktemp /tmp/62.XXX 00:05:07.018 + tmp_file_1=/tmp/62.v3o 00:05:07.018 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:07.018 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:07.018 + tmp_file_2=/tmp/spdk_tgt_config.json.J4G 00:05:07.018 + ret=0 00:05:07.018 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:07.584 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:07.584 + diff -u /tmp/62.v3o /tmp/spdk_tgt_config.json.J4G 00:05:07.584 + echo 'INFO: JSON config files are the same' 00:05:07.584 INFO: JSON config files are the same 00:05:07.584 + rm /tmp/62.v3o /tmp/spdk_tgt_config.json.J4G 00:05:07.584 + exit 0 00:05:07.584 18:12:25 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:07.584 18:12:25 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:07.584 INFO: changing configuration and checking if this can be detected... 00:05:07.584 18:12:25 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:07.584 18:12:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:07.841 18:12:25 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:07.841 18:12:25 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:07.841 18:12:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:07.841 + '[' 2 -ne 2 ']' 00:05:07.841 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:07.841 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
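The json_diff.sh comparison traced above normalizes both sides with config_filter.py -method sort before diffing, so key order alone cannot produce a false mismatch. A condensed sketch of the same idea, assuming config_filter.py reads the JSON on stdin (which is how json_diff.sh appears to use it):

    # Compare the live configuration against the saved file, ignoring key order.
    sort_cfg="$SPDK_DIR/test/json_config/config_filter.py -method sort"
    live=$(mktemp) saved=$(mktemp)

    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config | $sort_cfg > "$live"
    $sort_cfg < "$SPDK_DIR/spdk_tgt_config.json" > "$saved"

    if diff -u "$saved" "$live"; then
        echo "INFO: JSON config files are the same"
    else
        echo "INFO: configuration change detected."
    fi
    rm -f "$live" "$saved"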
00:05:07.841 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:07.841 +++ basename /dev/fd/62 00:05:07.841 ++ mktemp /tmp/62.XXX 00:05:07.841 + tmp_file_1=/tmp/62.gxz 00:05:07.841 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:07.841 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:07.841 + tmp_file_2=/tmp/spdk_tgt_config.json.DA7 00:05:07.841 + ret=0 00:05:07.841 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:08.099 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:08.099 + diff -u /tmp/62.gxz /tmp/spdk_tgt_config.json.DA7 00:05:08.099 + ret=1 00:05:08.099 + echo '=== Start of file: /tmp/62.gxz ===' 00:05:08.099 + cat /tmp/62.gxz 00:05:08.099 + echo '=== End of file: /tmp/62.gxz ===' 00:05:08.099 + echo '' 00:05:08.099 + echo '=== Start of file: /tmp/spdk_tgt_config.json.DA7 ===' 00:05:08.099 + cat /tmp/spdk_tgt_config.json.DA7 00:05:08.099 + echo '=== End of file: /tmp/spdk_tgt_config.json.DA7 ===' 00:05:08.099 + echo '' 00:05:08.099 + rm /tmp/62.gxz /tmp/spdk_tgt_config.json.DA7 00:05:08.099 + exit 1 00:05:08.099 18:12:26 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:08.099 INFO: configuration change detected. 00:05:08.099 18:12:26 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:08.099 18:12:26 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:08.099 18:12:26 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:08.099 18:12:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.099 18:12:26 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:08.099 18:12:26 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:08.099 18:12:26 json_config -- json_config/json_config.sh@324 -- # [[ -n 1125341 ]] 00:05:08.099 18:12:26 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:08.099 18:12:26 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:08.099 18:12:26 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:08.099 18:12:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.099 18:12:26 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:08.099 18:12:26 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:08.099 18:12:26 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:08.099 18:12:26 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:08.099 18:12:26 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:08.099 18:12:26 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:08.099 18:12:26 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:08.099 18:12:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.357 18:12:26 json_config -- json_config/json_config.sh@330 -- # killprocess 1125341 00:05:08.357 18:12:26 json_config -- common/autotest_common.sh@950 -- # '[' -z 1125341 ']' 00:05:08.357 18:12:26 json_config -- common/autotest_common.sh@954 -- # kill -0 1125341 00:05:08.357 18:12:26 json_config -- common/autotest_common.sh@955 -- # uname 00:05:08.357 18:12:26 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:08.357 18:12:26 
json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1125341 00:05:08.357 18:12:26 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:08.357 18:12:26 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:08.357 18:12:26 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1125341' 00:05:08.357 killing process with pid 1125341 00:05:08.357 18:12:26 json_config -- common/autotest_common.sh@969 -- # kill 1125341 00:05:08.357 18:12:26 json_config -- common/autotest_common.sh@974 -- # wait 1125341 00:05:10.255 18:12:28 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:10.255 18:12:28 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:10.255 18:12:28 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:10.255 18:12:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.255 18:12:28 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:10.255 18:12:28 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:10.255 INFO: Success 00:05:10.255 00:05:10.255 real 0m17.241s 00:05:10.255 user 0m19.027s 00:05:10.255 sys 0m2.721s 00:05:10.255 18:12:28 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.255 18:12:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.255 ************************************ 00:05:10.255 END TEST json_config 00:05:10.255 ************************************ 00:05:10.255 18:12:28 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:10.255 18:12:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.255 18:12:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.255 18:12:28 -- common/autotest_common.sh@10 -- # set +x 00:05:10.255 ************************************ 00:05:10.255 START TEST json_config_extra_key 00:05:10.255 ************************************ 00:05:10.255 18:12:28 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:10.255 18:12:28 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:10.255 18:12:28 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:05:10.255 18:12:28 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:10.255 18:12:28 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:10.255 18:12:28 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.255 18:12:28 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.255 18:12:28 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.255 18:12:28 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.255 18:12:28 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.255 18:12:28 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.255 18:12:28 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.255 18:12:28 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.255 18:12:28 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:05:10.255 18:12:28 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.255 18:12:28 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.255 18:12:28 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:10.255 18:12:28 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:10.255 18:12:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.255 18:12:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:10.255 18:12:28 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:10.255 18:12:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:10.255 18:12:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.255 18:12:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:10.255 18:12:28 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.255 18:12:28 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:10.255 18:12:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:10.255 18:12:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.255 18:12:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:10.255 18:12:28 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.255 18:12:28 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.255 18:12:28 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.255 18:12:28 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:10.255 18:12:28 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.255 18:12:28 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:10.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.255 --rc genhtml_branch_coverage=1 00:05:10.255 --rc genhtml_function_coverage=1 00:05:10.255 --rc genhtml_legend=1 00:05:10.255 --rc geninfo_all_blocks=1 00:05:10.255 --rc geninfo_unexecuted_blocks=1 00:05:10.255 00:05:10.255 ' 00:05:10.255 18:12:28 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:10.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.255 --rc genhtml_branch_coverage=1 00:05:10.255 --rc genhtml_function_coverage=1 00:05:10.255 --rc genhtml_legend=1 00:05:10.255 --rc geninfo_all_blocks=1 00:05:10.255 --rc geninfo_unexecuted_blocks=1 00:05:10.255 00:05:10.255 ' 00:05:10.255 18:12:28 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:10.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.255 --rc genhtml_branch_coverage=1 00:05:10.255 --rc genhtml_function_coverage=1 00:05:10.255 --rc genhtml_legend=1 00:05:10.255 --rc geninfo_all_blocks=1 00:05:10.255 --rc geninfo_unexecuted_blocks=1 00:05:10.255 00:05:10.255 ' 00:05:10.255 18:12:28 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:10.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.255 --rc genhtml_branch_coverage=1 00:05:10.255 --rc genhtml_function_coverage=1 00:05:10.255 --rc genhtml_legend=1 00:05:10.255 --rc geninfo_all_blocks=1 00:05:10.255 --rc geninfo_unexecuted_blocks=1 00:05:10.255 00:05:10.255 ' 00:05:10.255 18:12:28 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:10.255 18:12:28 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:10.255 18:12:28 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:10.255 18:12:28 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:10.255 18:12:28 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:10.255 18:12:28 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:10.255 18:12:28 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:10.255 18:12:28 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:10.255 18:12:28 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:10.255 18:12:28 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:10.255 18:12:28 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:10.255 18:12:28 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:10.255 18:12:28 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:10.255 18:12:28 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:10.255 18:12:28 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:10.255 18:12:28 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:10.255 18:12:28 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:10.255 18:12:28 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:10.255 18:12:28 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:10.255 18:12:28 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:10.255 18:12:28 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:10.255 18:12:28 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:10.255 18:12:28 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:10.255 18:12:28 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.256 18:12:28 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.256 18:12:28 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.256 18:12:28 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:10.256 18:12:28 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.256 18:12:28 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:10.256 18:12:28 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:10.256 18:12:28 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:10.256 18:12:28 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:10.256 18:12:28 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:10.256 18:12:28 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:10.256 18:12:28 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:10.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:10.256 18:12:28 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:10.256 18:12:28 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:10.256 18:12:28 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:10.256 18:12:28 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:10.256 18:12:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:10.256 18:12:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:10.256 18:12:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:10.256 18:12:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:10.256 18:12:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:10.256 18:12:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:10.256 18:12:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:10.256 18:12:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:10.256 18:12:28 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:10.256 18:12:28 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:10.256 INFO: launching applications... 
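The json_config/common.sh trace above keeps per-app state in bash associative arrays (app_pid, app_socket, app_params, configs_path) keyed by the app name, so the same helpers can drive the target here and other apps in related tests. A sketch of how this test's launch command is assembled from those arrays; the array values are copied from the trace, the assembly itself is illustrative:

    declare -A app_socket=( [target]='/var/tmp/spdk_tgt.sock' )
    declare -A app_params=( [target]='-m 0x1 -s 1024' )
    declare -A configs_path=( [target]="$SPDK_DIR/test/json_config/extra_key.json" )
    declare -A app_pid=( [target]='' )

    app=target
    # shellcheck disable=SC2086  # app_params is deliberately word-split into flags
    "$SPDK_DIR/build/bin/spdk_tgt" ${app_params[$app]} \
        -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
    app_pid[$app]=$!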
00:05:10.256 18:12:28 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:10.256 18:12:28 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:10.256 18:12:28 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:10.256 18:12:28 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:10.256 18:12:28 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:10.256 18:12:28 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:10.256 18:12:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:10.256 18:12:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:10.256 18:12:28 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1126316 00:05:10.256 18:12:28 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:10.256 18:12:28 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:10.256 Waiting for target to run... 00:05:10.256 18:12:28 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1126316 /var/tmp/spdk_tgt.sock 00:05:10.256 18:12:28 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1126316 ']' 00:05:10.256 18:12:28 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:10.256 18:12:28 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.256 18:12:28 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:10.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:10.256 18:12:28 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.256 18:12:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:10.256 [2024-10-08 18:12:28.314106] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:05:10.256 [2024-10-08 18:12:28.314212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1126316 ] 00:05:10.822 [2024-10-08 18:12:28.848084] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.822 [2024-10-08 18:12:28.934902] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.079 18:12:29 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.079 18:12:29 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:11.079 18:12:29 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:11.079 00:05:11.079 18:12:29 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:11.079 INFO: shutting down applications... 
00:05:11.079 18:12:29 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:11.079 18:12:29 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:11.079 18:12:29 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:11.079 18:12:29 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1126316 ]] 00:05:11.079 18:12:29 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1126316 00:05:11.079 18:12:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:11.079 18:12:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.079 18:12:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1126316 00:05:11.079 18:12:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:11.645 18:12:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:11.645 18:12:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.645 18:12:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1126316 00:05:11.645 18:12:29 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:11.645 18:12:29 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:11.645 18:12:29 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:11.645 18:12:29 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:11.645 SPDK target shutdown done 00:05:11.645 18:12:29 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:11.645 Success 00:05:11.645 00:05:11.645 real 0m1.679s 00:05:11.645 user 0m1.542s 00:05:11.645 sys 0m0.630s 00:05:11.645 18:12:29 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.645 18:12:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:11.645 ************************************ 00:05:11.645 END TEST json_config_extra_key 00:05:11.645 ************************************ 00:05:11.645 18:12:29 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:11.645 18:12:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.645 18:12:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.645 18:12:29 -- common/autotest_common.sh@10 -- # set +x 00:05:11.645 ************************************ 00:05:11.645 START TEST alias_rpc 00:05:11.645 ************************************ 00:05:11.645 18:12:29 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:11.645 * Looking for test storage... 
00:05:11.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:11.645 18:12:29 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:11.645 18:12:29 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:11.645 18:12:29 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:11.903 18:12:29 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:11.903 18:12:29 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.903 18:12:29 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.903 18:12:29 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.903 18:12:29 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.903 18:12:29 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.903 18:12:29 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.903 18:12:29 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.903 18:12:29 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.903 18:12:29 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.903 18:12:29 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.903 18:12:29 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.903 18:12:29 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:11.903 18:12:29 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:11.903 18:12:29 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.903 18:12:29 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:11.903 18:12:29 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:11.903 18:12:29 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:11.903 18:12:29 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.903 18:12:29 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:11.903 18:12:29 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.903 18:12:29 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:11.903 18:12:29 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:11.903 18:12:29 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.903 18:12:29 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:11.903 18:12:29 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.903 18:12:29 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.903 18:12:29 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.903 18:12:29 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:11.903 18:12:29 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.903 18:12:29 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:11.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.903 --rc genhtml_branch_coverage=1 00:05:11.903 --rc genhtml_function_coverage=1 00:05:11.903 --rc genhtml_legend=1 00:05:11.903 --rc geninfo_all_blocks=1 00:05:11.903 --rc geninfo_unexecuted_blocks=1 00:05:11.903 00:05:11.903 ' 00:05:11.903 18:12:29 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:11.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.903 --rc genhtml_branch_coverage=1 00:05:11.903 --rc genhtml_function_coverage=1 00:05:11.903 --rc genhtml_legend=1 00:05:11.903 --rc geninfo_all_blocks=1 00:05:11.903 --rc geninfo_unexecuted_blocks=1 00:05:11.903 00:05:11.903 ' 00:05:11.903 18:12:29 
alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:11.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.903 --rc genhtml_branch_coverage=1 00:05:11.903 --rc genhtml_function_coverage=1 00:05:11.903 --rc genhtml_legend=1 00:05:11.903 --rc geninfo_all_blocks=1 00:05:11.903 --rc geninfo_unexecuted_blocks=1 00:05:11.903 00:05:11.903 ' 00:05:11.903 18:12:29 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:11.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.903 --rc genhtml_branch_coverage=1 00:05:11.903 --rc genhtml_function_coverage=1 00:05:11.903 --rc genhtml_legend=1 00:05:11.903 --rc geninfo_all_blocks=1 00:05:11.903 --rc geninfo_unexecuted_blocks=1 00:05:11.903 00:05:11.903 ' 00:05:11.903 18:12:29 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:11.903 18:12:29 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1126593 00:05:11.903 18:12:29 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.903 18:12:29 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1126593 00:05:11.903 18:12:29 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 1126593 ']' 00:05:11.903 18:12:29 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.903 18:12:29 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:11.903 18:12:29 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.903 18:12:29 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:11.903 18:12:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.903 [2024-10-08 18:12:30.052247] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:05:11.903 [2024-10-08 18:12:30.052373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1126593 ] 00:05:11.903 [2024-10-08 18:12:30.110183] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.160 [2024-10-08 18:12:30.219948] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.418 18:12:30 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:12.418 18:12:30 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:12.418 18:12:30 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:12.675 18:12:30 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1126593 00:05:12.675 18:12:30 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1126593 ']' 00:05:12.675 18:12:30 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1126593 00:05:12.675 18:12:30 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:12.675 18:12:30 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:12.675 18:12:30 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1126593 00:05:12.675 18:12:30 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:12.675 18:12:30 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:12.675 18:12:30 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1126593' 00:05:12.675 killing process with pid 1126593 00:05:12.675 18:12:30 alias_rpc -- common/autotest_common.sh@969 -- # kill 1126593 00:05:12.675 18:12:30 alias_rpc -- common/autotest_common.sh@974 -- # wait 1126593 00:05:13.241 00:05:13.241 real 0m1.477s 00:05:13.241 user 0m1.592s 00:05:13.241 sys 0m0.452s 00:05:13.241 18:12:31 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.241 18:12:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.241 ************************************ 00:05:13.241 END TEST alias_rpc 00:05:13.241 ************************************ 00:05:13.241 18:12:31 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:13.241 18:12:31 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:13.241 18:12:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.241 18:12:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.241 18:12:31 -- common/autotest_common.sh@10 -- # set +x 00:05:13.241 ************************************ 00:05:13.241 START TEST spdkcli_tcp 00:05:13.241 ************************************ 00:05:13.241 18:12:31 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:13.241 * Looking for test storage... 
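The alias_rpc run above starts a plain spdk_tgt (default RPC socket /var/tmp/spdk.sock) and then drives scripts/rpc.py load_config -i, which is the core of that test. A minimal sketch of the round trip; the -i flag is copied verbatim from the trace, and feeding the configuration on stdin (plus the temp file name) is an assumption made for illustration:

    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"

    # Save the current configuration, then replay it into the same target.
    $RPC save_config > /tmp/alias_config.json      # hypothetical temp file name
    $RPC load_config -i < /tmp/alias_config.json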
00:05:13.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:13.241 18:12:31 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:13.241 18:12:31 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:13.241 18:12:31 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:13.241 18:12:31 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:13.241 18:12:31 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.241 18:12:31 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.241 18:12:31 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.241 18:12:31 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.241 18:12:31 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.241 18:12:31 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.241 18:12:31 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.241 18:12:31 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.241 18:12:31 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.241 18:12:31 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.241 18:12:31 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.241 18:12:31 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:13.241 18:12:31 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:13.241 18:12:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.241 18:12:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:13.241 18:12:31 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:13.241 18:12:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:13.241 18:12:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.241 18:12:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:13.241 18:12:31 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.241 18:12:31 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:13.241 18:12:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:13.241 18:12:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.241 18:12:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:13.241 18:12:31 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.241 18:12:31 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.241 18:12:31 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.241 18:12:31 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:13.241 18:12:31 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.241 18:12:31 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:13.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.241 --rc genhtml_branch_coverage=1 00:05:13.241 --rc genhtml_function_coverage=1 00:05:13.241 --rc genhtml_legend=1 00:05:13.241 --rc geninfo_all_blocks=1 00:05:13.241 --rc geninfo_unexecuted_blocks=1 00:05:13.241 00:05:13.241 ' 00:05:13.241 18:12:31 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:13.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.241 --rc genhtml_branch_coverage=1 00:05:13.241 --rc genhtml_function_coverage=1 00:05:13.241 --rc genhtml_legend=1 00:05:13.241 --rc geninfo_all_blocks=1 00:05:13.241 --rc 
geninfo_unexecuted_blocks=1 00:05:13.241 00:05:13.241 ' 00:05:13.241 18:12:31 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:13.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.241 --rc genhtml_branch_coverage=1 00:05:13.241 --rc genhtml_function_coverage=1 00:05:13.241 --rc genhtml_legend=1 00:05:13.241 --rc geninfo_all_blocks=1 00:05:13.241 --rc geninfo_unexecuted_blocks=1 00:05:13.241 00:05:13.241 ' 00:05:13.241 18:12:31 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:13.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.241 --rc genhtml_branch_coverage=1 00:05:13.241 --rc genhtml_function_coverage=1 00:05:13.241 --rc genhtml_legend=1 00:05:13.241 --rc geninfo_all_blocks=1 00:05:13.241 --rc geninfo_unexecuted_blocks=1 00:05:13.241 00:05:13.241 ' 00:05:13.241 18:12:31 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:13.241 18:12:31 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:13.241 18:12:31 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:13.241 18:12:31 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:13.241 18:12:31 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:13.241 18:12:31 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:13.241 18:12:31 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:13.241 18:12:31 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:13.241 18:12:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:13.499 18:12:31 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1126794 00:05:13.499 18:12:31 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:13.499 18:12:31 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1126794 00:05:13.499 18:12:31 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1126794 ']' 00:05:13.499 18:12:31 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.499 18:12:31 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:13.499 18:12:31 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.499 18:12:31 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:13.499 18:12:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:13.499 [2024-10-08 18:12:31.586062] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
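The spdkcli_tcp run set up above starts spdk_tgt on two cores (-m 0x3) and, as the trace that follows shows, bridges TCP port 9998 to the target's UNIX-domain RPC socket with socat so rpc.py can talk to it over 127.0.0.1:9998. A sketch of that bridge using the commands and flag values visible in the trace:

    # Expose the target's UNIX-domain RPC socket on TCP 127.0.0.1:9998.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # Drive an RPC over TCP instead of the UNIX socket; -r/-t values as in the trace.
    "$SPDK_DIR/scripts/rpc.py" -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"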
00:05:13.499 [2024-10-08 18:12:31.586160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1126794 ] 00:05:13.499 [2024-10-08 18:12:31.647431] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.499 [2024-10-08 18:12:31.758241] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.499 [2024-10-08 18:12:31.758245] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.757 18:12:32 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:13.757 18:12:32 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:13.757 18:12:32 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1126919 00:05:13.757 18:12:32 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:13.757 18:12:32 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:14.015 [ 00:05:14.015 "bdev_malloc_delete", 00:05:14.015 "bdev_malloc_create", 00:05:14.015 "bdev_null_resize", 00:05:14.015 "bdev_null_delete", 00:05:14.015 "bdev_null_create", 00:05:14.015 "bdev_nvme_cuse_unregister", 00:05:14.015 "bdev_nvme_cuse_register", 00:05:14.015 "bdev_opal_new_user", 00:05:14.015 "bdev_opal_set_lock_state", 00:05:14.015 "bdev_opal_delete", 00:05:14.015 "bdev_opal_get_info", 00:05:14.015 "bdev_opal_create", 00:05:14.015 "bdev_nvme_opal_revert", 00:05:14.015 "bdev_nvme_opal_init", 00:05:14.015 "bdev_nvme_send_cmd", 00:05:14.015 "bdev_nvme_set_keys", 00:05:14.015 "bdev_nvme_get_path_iostat", 00:05:14.015 "bdev_nvme_get_mdns_discovery_info", 00:05:14.015 "bdev_nvme_stop_mdns_discovery", 00:05:14.015 "bdev_nvme_start_mdns_discovery", 00:05:14.015 "bdev_nvme_set_multipath_policy", 00:05:14.015 "bdev_nvme_set_preferred_path", 00:05:14.015 "bdev_nvme_get_io_paths", 00:05:14.015 "bdev_nvme_remove_error_injection", 00:05:14.015 "bdev_nvme_add_error_injection", 00:05:14.015 "bdev_nvme_get_discovery_info", 00:05:14.015 "bdev_nvme_stop_discovery", 00:05:14.015 "bdev_nvme_start_discovery", 00:05:14.015 "bdev_nvme_get_controller_health_info", 00:05:14.015 "bdev_nvme_disable_controller", 00:05:14.015 "bdev_nvme_enable_controller", 00:05:14.015 "bdev_nvme_reset_controller", 00:05:14.015 "bdev_nvme_get_transport_statistics", 00:05:14.015 "bdev_nvme_apply_firmware", 00:05:14.015 "bdev_nvme_detach_controller", 00:05:14.015 "bdev_nvme_get_controllers", 00:05:14.015 "bdev_nvme_attach_controller", 00:05:14.015 "bdev_nvme_set_hotplug", 00:05:14.015 "bdev_nvme_set_options", 00:05:14.015 "bdev_passthru_delete", 00:05:14.015 "bdev_passthru_create", 00:05:14.015 "bdev_lvol_set_parent_bdev", 00:05:14.015 "bdev_lvol_set_parent", 00:05:14.015 "bdev_lvol_check_shallow_copy", 00:05:14.015 "bdev_lvol_start_shallow_copy", 00:05:14.015 "bdev_lvol_grow_lvstore", 00:05:14.015 "bdev_lvol_get_lvols", 00:05:14.015 "bdev_lvol_get_lvstores", 00:05:14.015 "bdev_lvol_delete", 00:05:14.015 "bdev_lvol_set_read_only", 00:05:14.015 "bdev_lvol_resize", 00:05:14.015 "bdev_lvol_decouple_parent", 00:05:14.015 "bdev_lvol_inflate", 00:05:14.015 "bdev_lvol_rename", 00:05:14.015 "bdev_lvol_clone_bdev", 00:05:14.015 "bdev_lvol_clone", 00:05:14.015 "bdev_lvol_snapshot", 00:05:14.015 "bdev_lvol_create", 00:05:14.015 "bdev_lvol_delete_lvstore", 00:05:14.015 "bdev_lvol_rename_lvstore", 
00:05:14.015 "bdev_lvol_create_lvstore", 00:05:14.015 "bdev_raid_set_options", 00:05:14.015 "bdev_raid_remove_base_bdev", 00:05:14.015 "bdev_raid_add_base_bdev", 00:05:14.015 "bdev_raid_delete", 00:05:14.015 "bdev_raid_create", 00:05:14.015 "bdev_raid_get_bdevs", 00:05:14.015 "bdev_error_inject_error", 00:05:14.015 "bdev_error_delete", 00:05:14.015 "bdev_error_create", 00:05:14.015 "bdev_split_delete", 00:05:14.015 "bdev_split_create", 00:05:14.015 "bdev_delay_delete", 00:05:14.015 "bdev_delay_create", 00:05:14.015 "bdev_delay_update_latency", 00:05:14.015 "bdev_zone_block_delete", 00:05:14.015 "bdev_zone_block_create", 00:05:14.015 "blobfs_create", 00:05:14.015 "blobfs_detect", 00:05:14.015 "blobfs_set_cache_size", 00:05:14.015 "bdev_aio_delete", 00:05:14.015 "bdev_aio_rescan", 00:05:14.015 "bdev_aio_create", 00:05:14.015 "bdev_ftl_set_property", 00:05:14.015 "bdev_ftl_get_properties", 00:05:14.015 "bdev_ftl_get_stats", 00:05:14.015 "bdev_ftl_unmap", 00:05:14.015 "bdev_ftl_unload", 00:05:14.015 "bdev_ftl_delete", 00:05:14.015 "bdev_ftl_load", 00:05:14.015 "bdev_ftl_create", 00:05:14.015 "bdev_virtio_attach_controller", 00:05:14.015 "bdev_virtio_scsi_get_devices", 00:05:14.015 "bdev_virtio_detach_controller", 00:05:14.015 "bdev_virtio_blk_set_hotplug", 00:05:14.015 "bdev_iscsi_delete", 00:05:14.015 "bdev_iscsi_create", 00:05:14.015 "bdev_iscsi_set_options", 00:05:14.015 "accel_error_inject_error", 00:05:14.015 "ioat_scan_accel_module", 00:05:14.015 "dsa_scan_accel_module", 00:05:14.015 "iaa_scan_accel_module", 00:05:14.015 "vfu_virtio_create_fs_endpoint", 00:05:14.015 "vfu_virtio_create_scsi_endpoint", 00:05:14.015 "vfu_virtio_scsi_remove_target", 00:05:14.015 "vfu_virtio_scsi_add_target", 00:05:14.015 "vfu_virtio_create_blk_endpoint", 00:05:14.015 "vfu_virtio_delete_endpoint", 00:05:14.015 "keyring_file_remove_key", 00:05:14.015 "keyring_file_add_key", 00:05:14.015 "keyring_linux_set_options", 00:05:14.015 "fsdev_aio_delete", 00:05:14.015 "fsdev_aio_create", 00:05:14.015 "iscsi_get_histogram", 00:05:14.015 "iscsi_enable_histogram", 00:05:14.015 "iscsi_set_options", 00:05:14.015 "iscsi_get_auth_groups", 00:05:14.015 "iscsi_auth_group_remove_secret", 00:05:14.015 "iscsi_auth_group_add_secret", 00:05:14.015 "iscsi_delete_auth_group", 00:05:14.015 "iscsi_create_auth_group", 00:05:14.015 "iscsi_set_discovery_auth", 00:05:14.015 "iscsi_get_options", 00:05:14.015 "iscsi_target_node_request_logout", 00:05:14.016 "iscsi_target_node_set_redirect", 00:05:14.016 "iscsi_target_node_set_auth", 00:05:14.016 "iscsi_target_node_add_lun", 00:05:14.016 "iscsi_get_stats", 00:05:14.016 "iscsi_get_connections", 00:05:14.016 "iscsi_portal_group_set_auth", 00:05:14.016 "iscsi_start_portal_group", 00:05:14.016 "iscsi_delete_portal_group", 00:05:14.016 "iscsi_create_portal_group", 00:05:14.016 "iscsi_get_portal_groups", 00:05:14.016 "iscsi_delete_target_node", 00:05:14.016 "iscsi_target_node_remove_pg_ig_maps", 00:05:14.016 "iscsi_target_node_add_pg_ig_maps", 00:05:14.016 "iscsi_create_target_node", 00:05:14.016 "iscsi_get_target_nodes", 00:05:14.016 "iscsi_delete_initiator_group", 00:05:14.016 "iscsi_initiator_group_remove_initiators", 00:05:14.016 "iscsi_initiator_group_add_initiators", 00:05:14.016 "iscsi_create_initiator_group", 00:05:14.016 "iscsi_get_initiator_groups", 00:05:14.016 "nvmf_set_crdt", 00:05:14.016 "nvmf_set_config", 00:05:14.016 "nvmf_set_max_subsystems", 00:05:14.016 "nvmf_stop_mdns_prr", 00:05:14.016 "nvmf_publish_mdns_prr", 00:05:14.016 "nvmf_subsystem_get_listeners", 00:05:14.016 
"nvmf_subsystem_get_qpairs", 00:05:14.016 "nvmf_subsystem_get_controllers", 00:05:14.016 "nvmf_get_stats", 00:05:14.016 "nvmf_get_transports", 00:05:14.016 "nvmf_create_transport", 00:05:14.016 "nvmf_get_targets", 00:05:14.016 "nvmf_delete_target", 00:05:14.016 "nvmf_create_target", 00:05:14.016 "nvmf_subsystem_allow_any_host", 00:05:14.016 "nvmf_subsystem_set_keys", 00:05:14.016 "nvmf_subsystem_remove_host", 00:05:14.016 "nvmf_subsystem_add_host", 00:05:14.016 "nvmf_ns_remove_host", 00:05:14.016 "nvmf_ns_add_host", 00:05:14.016 "nvmf_subsystem_remove_ns", 00:05:14.016 "nvmf_subsystem_set_ns_ana_group", 00:05:14.016 "nvmf_subsystem_add_ns", 00:05:14.016 "nvmf_subsystem_listener_set_ana_state", 00:05:14.016 "nvmf_discovery_get_referrals", 00:05:14.016 "nvmf_discovery_remove_referral", 00:05:14.016 "nvmf_discovery_add_referral", 00:05:14.016 "nvmf_subsystem_remove_listener", 00:05:14.016 "nvmf_subsystem_add_listener", 00:05:14.016 "nvmf_delete_subsystem", 00:05:14.016 "nvmf_create_subsystem", 00:05:14.016 "nvmf_get_subsystems", 00:05:14.016 "env_dpdk_get_mem_stats", 00:05:14.016 "nbd_get_disks", 00:05:14.016 "nbd_stop_disk", 00:05:14.016 "nbd_start_disk", 00:05:14.016 "ublk_recover_disk", 00:05:14.016 "ublk_get_disks", 00:05:14.016 "ublk_stop_disk", 00:05:14.016 "ublk_start_disk", 00:05:14.016 "ublk_destroy_target", 00:05:14.016 "ublk_create_target", 00:05:14.016 "virtio_blk_create_transport", 00:05:14.016 "virtio_blk_get_transports", 00:05:14.016 "vhost_controller_set_coalescing", 00:05:14.016 "vhost_get_controllers", 00:05:14.016 "vhost_delete_controller", 00:05:14.016 "vhost_create_blk_controller", 00:05:14.016 "vhost_scsi_controller_remove_target", 00:05:14.016 "vhost_scsi_controller_add_target", 00:05:14.016 "vhost_start_scsi_controller", 00:05:14.016 "vhost_create_scsi_controller", 00:05:14.016 "thread_set_cpumask", 00:05:14.016 "scheduler_set_options", 00:05:14.016 "framework_get_governor", 00:05:14.016 "framework_get_scheduler", 00:05:14.016 "framework_set_scheduler", 00:05:14.016 "framework_get_reactors", 00:05:14.016 "thread_get_io_channels", 00:05:14.016 "thread_get_pollers", 00:05:14.016 "thread_get_stats", 00:05:14.016 "framework_monitor_context_switch", 00:05:14.016 "spdk_kill_instance", 00:05:14.016 "log_enable_timestamps", 00:05:14.016 "log_get_flags", 00:05:14.016 "log_clear_flag", 00:05:14.016 "log_set_flag", 00:05:14.016 "log_get_level", 00:05:14.016 "log_set_level", 00:05:14.016 "log_get_print_level", 00:05:14.016 "log_set_print_level", 00:05:14.016 "framework_enable_cpumask_locks", 00:05:14.016 "framework_disable_cpumask_locks", 00:05:14.016 "framework_wait_init", 00:05:14.016 "framework_start_init", 00:05:14.016 "scsi_get_devices", 00:05:14.016 "bdev_get_histogram", 00:05:14.016 "bdev_enable_histogram", 00:05:14.016 "bdev_set_qos_limit", 00:05:14.016 "bdev_set_qd_sampling_period", 00:05:14.016 "bdev_get_bdevs", 00:05:14.016 "bdev_reset_iostat", 00:05:14.016 "bdev_get_iostat", 00:05:14.016 "bdev_examine", 00:05:14.016 "bdev_wait_for_examine", 00:05:14.016 "bdev_set_options", 00:05:14.016 "accel_get_stats", 00:05:14.016 "accel_set_options", 00:05:14.016 "accel_set_driver", 00:05:14.016 "accel_crypto_key_destroy", 00:05:14.016 "accel_crypto_keys_get", 00:05:14.016 "accel_crypto_key_create", 00:05:14.016 "accel_assign_opc", 00:05:14.016 "accel_get_module_info", 00:05:14.016 "accel_get_opc_assignments", 00:05:14.016 "vmd_rescan", 00:05:14.016 "vmd_remove_device", 00:05:14.016 "vmd_enable", 00:05:14.016 "sock_get_default_impl", 00:05:14.016 "sock_set_default_impl", 
00:05:14.016 "sock_impl_set_options", 00:05:14.016 "sock_impl_get_options", 00:05:14.016 "iobuf_get_stats", 00:05:14.016 "iobuf_set_options", 00:05:14.016 "keyring_get_keys", 00:05:14.016 "vfu_tgt_set_base_path", 00:05:14.016 "framework_get_pci_devices", 00:05:14.016 "framework_get_config", 00:05:14.016 "framework_get_subsystems", 00:05:14.016 "fsdev_set_opts", 00:05:14.016 "fsdev_get_opts", 00:05:14.016 "trace_get_info", 00:05:14.016 "trace_get_tpoint_group_mask", 00:05:14.016 "trace_disable_tpoint_group", 00:05:14.016 "trace_enable_tpoint_group", 00:05:14.016 "trace_clear_tpoint_mask", 00:05:14.016 "trace_set_tpoint_mask", 00:05:14.016 "notify_get_notifications", 00:05:14.016 "notify_get_types", 00:05:14.016 "spdk_get_version", 00:05:14.016 "rpc_get_methods" 00:05:14.016 ] 00:05:14.274 18:12:32 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:14.274 18:12:32 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:14.274 18:12:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:14.274 18:12:32 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:14.274 18:12:32 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1126794 00:05:14.274 18:12:32 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1126794 ']' 00:05:14.274 18:12:32 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1126794 00:05:14.274 18:12:32 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:14.274 18:12:32 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:14.274 18:12:32 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1126794 00:05:14.274 18:12:32 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:14.274 18:12:32 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:14.274 18:12:32 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1126794' 00:05:14.274 killing process with pid 1126794 00:05:14.274 18:12:32 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1126794 00:05:14.274 18:12:32 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1126794 00:05:14.839 00:05:14.839 real 0m1.480s 00:05:14.839 user 0m2.555s 00:05:14.839 sys 0m0.502s 00:05:14.839 18:12:32 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.839 18:12:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:14.839 ************************************ 00:05:14.839 END TEST spdkcli_tcp 00:05:14.839 ************************************ 00:05:14.839 18:12:32 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:14.839 18:12:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.839 18:12:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.839 18:12:32 -- common/autotest_common.sh@10 -- # set +x 00:05:14.839 ************************************ 00:05:14.839 START TEST dpdk_mem_utility 00:05:14.839 ************************************ 00:05:14.839 18:12:32 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:14.839 * Looking for test storage... 
00:05:14.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:14.839 18:12:32 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:14.839 18:12:32 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:14.839 18:12:32 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:14.839 18:12:33 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:14.839 18:12:33 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.839 18:12:33 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.839 18:12:33 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.839 18:12:33 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.839 18:12:33 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.839 18:12:33 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.839 18:12:33 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.839 18:12:33 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.839 18:12:33 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.839 18:12:33 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.839 18:12:33 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.839 18:12:33 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:14.839 18:12:33 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:14.839 18:12:33 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.839 18:12:33 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:14.839 18:12:33 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:14.839 18:12:33 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:14.839 18:12:33 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.839 18:12:33 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:14.839 18:12:33 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.839 18:12:33 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:14.839 18:12:33 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:14.839 18:12:33 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.839 18:12:33 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:14.839 18:12:33 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.839 18:12:33 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.839 18:12:33 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.839 18:12:33 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:14.839 18:12:33 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.839 18:12:33 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:14.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.839 --rc genhtml_branch_coverage=1 00:05:14.839 --rc genhtml_function_coverage=1 00:05:14.839 --rc genhtml_legend=1 00:05:14.839 --rc geninfo_all_blocks=1 00:05:14.839 --rc geninfo_unexecuted_blocks=1 00:05:14.839 00:05:14.839 ' 00:05:14.839 18:12:33 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:14.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.839 --rc 
genhtml_branch_coverage=1 00:05:14.839 --rc genhtml_function_coverage=1 00:05:14.839 --rc genhtml_legend=1 00:05:14.839 --rc geninfo_all_blocks=1 00:05:14.839 --rc geninfo_unexecuted_blocks=1 00:05:14.839 00:05:14.839 ' 00:05:14.839 18:12:33 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:14.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.839 --rc genhtml_branch_coverage=1 00:05:14.839 --rc genhtml_function_coverage=1 00:05:14.839 --rc genhtml_legend=1 00:05:14.839 --rc geninfo_all_blocks=1 00:05:14.839 --rc geninfo_unexecuted_blocks=1 00:05:14.839 00:05:14.839 ' 00:05:14.839 18:12:33 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:14.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.839 --rc genhtml_branch_coverage=1 00:05:14.839 --rc genhtml_function_coverage=1 00:05:14.839 --rc genhtml_legend=1 00:05:14.839 --rc geninfo_all_blocks=1 00:05:14.839 --rc geninfo_unexecuted_blocks=1 00:05:14.839 00:05:14.839 ' 00:05:14.839 18:12:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:14.839 18:12:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1127119 00:05:14.839 18:12:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.839 18:12:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1127119 00:05:14.839 18:12:33 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 1127119 ']' 00:05:14.839 18:12:33 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.839 18:12:33 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:14.839 18:12:33 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.839 18:12:33 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:14.839 18:12:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:14.839 [2024-10-08 18:12:33.109715] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:05:14.839 [2024-10-08 18:12:33.109814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127119 ] 00:05:15.098 [2024-10-08 18:12:33.167446] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.098 [2024-10-08 18:12:33.275157] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.356 18:12:33 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:15.356 18:12:33 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:15.356 18:12:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:15.356 18:12:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:15.356 18:12:33 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.356 18:12:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:15.356 { 00:05:15.356 "filename": "/tmp/spdk_mem_dump.txt" 00:05:15.356 } 00:05:15.356 18:12:33 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.356 18:12:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:15.356 DPDK memory size 860.000000 MiB in 1 heap(s) 00:05:15.356 1 heaps totaling size 860.000000 MiB 00:05:15.356 size: 860.000000 MiB heap id: 0 00:05:15.356 end heaps---------- 00:05:15.356 9 mempools totaling size 642.649841 MiB 00:05:15.356 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:15.356 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:15.356 size: 92.545471 MiB name: bdev_io_1127119 00:05:15.356 size: 51.011292 MiB name: evtpool_1127119 00:05:15.356 size: 50.003479 MiB name: msgpool_1127119 00:05:15.356 size: 36.509338 MiB name: fsdev_io_1127119 00:05:15.356 size: 21.763794 MiB name: PDU_Pool 00:05:15.356 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:15.356 size: 0.026123 MiB name: Session_Pool 00:05:15.356 end mempools------- 00:05:15.356 6 memzones totaling size 4.142822 MiB 00:05:15.356 size: 1.000366 MiB name: RG_ring_0_1127119 00:05:15.356 size: 1.000366 MiB name: RG_ring_1_1127119 00:05:15.356 size: 1.000366 MiB name: RG_ring_4_1127119 00:05:15.356 size: 1.000366 MiB name: RG_ring_5_1127119 00:05:15.356 size: 0.125366 MiB name: RG_ring_2_1127119 00:05:15.356 size: 0.015991 MiB name: RG_ring_3_1127119 00:05:15.356 end memzones------- 00:05:15.356 18:12:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:15.614 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:05:15.614 list of free elements. 
size: 13.984680 MiB 00:05:15.614 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:15.614 element at address: 0x200000800000 with size: 1.996948 MiB 00:05:15.614 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:05:15.614 element at address: 0x20001be00000 with size: 0.999878 MiB 00:05:15.614 element at address: 0x200034a00000 with size: 0.994446 MiB 00:05:15.614 element at address: 0x200009600000 with size: 0.959839 MiB 00:05:15.614 element at address: 0x200015e00000 with size: 0.954285 MiB 00:05:15.614 element at address: 0x20001c000000 with size: 0.936584 MiB 00:05:15.614 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:15.614 element at address: 0x20001d800000 with size: 0.582886 MiB 00:05:15.614 element at address: 0x200003e00000 with size: 0.495422 MiB 00:05:15.614 element at address: 0x20000d800000 with size: 0.490723 MiB 00:05:15.614 element at address: 0x20001c200000 with size: 0.485657 MiB 00:05:15.614 element at address: 0x200007000000 with size: 0.481934 MiB 00:05:15.614 element at address: 0x20002ac00000 with size: 0.410034 MiB 00:05:15.614 element at address: 0x200003a00000 with size: 0.355042 MiB 00:05:15.614 list of standard malloc elements. size: 199.218628 MiB 00:05:15.614 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:05:15.614 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:05:15.614 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:05:15.614 element at address: 0x20001befff80 with size: 1.000122 MiB 00:05:15.614 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:05:15.614 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:15.614 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:05:15.614 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:15.614 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:05:15.614 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:15.614 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:15.614 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:15.614 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:15.614 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:15.614 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:15.614 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:15.614 element at address: 0x200003a5ae40 with size: 0.000183 MiB 00:05:15.614 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:15.614 element at address: 0x200003a5f300 with size: 0.000183 MiB 00:05:15.614 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:05:15.614 element at address: 0x200003a7f680 with size: 0.000183 MiB 00:05:15.614 element at address: 0x200003aff940 with size: 0.000183 MiB 00:05:15.614 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:15.614 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:05:15.614 element at address: 0x200003eff000 with size: 0.000183 MiB 00:05:15.614 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:15.614 element at address: 0x20000707b600 with size: 0.000183 MiB 00:05:15.614 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:05:15.614 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:05:15.614 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:05:15.614 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:05:15.614 element at address: 0x20000d87dac0 with size: 0.000183 MiB 
00:05:15.614 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:05:15.614 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:05:15.614 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:05:15.614 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:05:15.614 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:05:15.614 element at address: 0x20001d895380 with size: 0.000183 MiB 00:05:15.614 element at address: 0x20001d895440 with size: 0.000183 MiB 00:05:15.614 element at address: 0x20002ac68f80 with size: 0.000183 MiB 00:05:15.615 element at address: 0x20002ac69040 with size: 0.000183 MiB 00:05:15.615 element at address: 0x20002ac6fc40 with size: 0.000183 MiB 00:05:15.615 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:05:15.615 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:05:15.615 list of memzone associated elements. size: 646.796692 MiB 00:05:15.615 element at address: 0x20001d895500 with size: 211.416748 MiB 00:05:15.615 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:15.615 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:05:15.615 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:15.615 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:05:15.615 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_1127119_0 00:05:15.615 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:15.615 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1127119_0 00:05:15.615 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:15.615 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1127119_0 00:05:15.615 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:05:15.615 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1127119_0 00:05:15.615 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:05:15.615 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:15.615 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:05:15.615 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:15.615 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:15.615 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1127119 00:05:15.615 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:15.615 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1127119 00:05:15.615 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:15.615 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1127119 00:05:15.615 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:05:15.615 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:15.615 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:05:15.615 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:15.615 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:05:15.615 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:15.615 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:05:15.615 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:15.615 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:15.615 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1127119 00:05:15.615 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:15.615 associated memzone info: 
size: 1.000366 MiB name: RG_ring_1_1127119 00:05:15.615 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:05:15.615 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1127119 00:05:15.615 element at address: 0x200034afe940 with size: 1.000488 MiB 00:05:15.615 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1127119 00:05:15.615 element at address: 0x200003a7f740 with size: 0.500488 MiB 00:05:15.615 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1127119 00:05:15.615 element at address: 0x200003e7ee00 with size: 0.500488 MiB 00:05:15.615 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1127119 00:05:15.615 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:05:15.615 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:15.615 element at address: 0x20000707b780 with size: 0.500488 MiB 00:05:15.615 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:15.615 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:05:15.615 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:15.615 element at address: 0x200003a5f3c0 with size: 0.125488 MiB 00:05:15.615 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1127119 00:05:15.615 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:05:15.615 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:15.615 element at address: 0x20002ac69100 with size: 0.023743 MiB 00:05:15.615 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:15.615 element at address: 0x200003a5b100 with size: 0.016113 MiB 00:05:15.615 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1127119 00:05:15.615 element at address: 0x20002ac6f240 with size: 0.002441 MiB 00:05:15.615 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:15.615 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:15.615 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1127119 00:05:15.615 element at address: 0x200003affa00 with size: 0.000305 MiB 00:05:15.615 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1127119 00:05:15.615 element at address: 0x200003a5af00 with size: 0.000305 MiB 00:05:15.615 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1127119 00:05:15.615 element at address: 0x20002ac6fd00 with size: 0.000305 MiB 00:05:15.615 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:15.615 18:12:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:15.615 18:12:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1127119 00:05:15.615 18:12:33 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1127119 ']' 00:05:15.615 18:12:33 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1127119 00:05:15.615 18:12:33 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:15.615 18:12:33 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:15.615 18:12:33 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1127119 00:05:15.615 18:12:33 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:15.615 18:12:33 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:15.615 18:12:33 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1127119' 
00:05:15.615 killing process with pid 1127119 00:05:15.615 18:12:33 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1127119 00:05:15.615 18:12:33 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1127119 00:05:16.181 00:05:16.181 real 0m1.271s 00:05:16.181 user 0m1.246s 00:05:16.181 sys 0m0.432s 00:05:16.181 18:12:34 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.181 18:12:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:16.181 ************************************ 00:05:16.181 END TEST dpdk_mem_utility 00:05:16.181 ************************************ 00:05:16.181 18:12:34 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:16.181 18:12:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.181 18:12:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.181 18:12:34 -- common/autotest_common.sh@10 -- # set +x 00:05:16.181 ************************************ 00:05:16.181 START TEST event 00:05:16.181 ************************************ 00:05:16.181 18:12:34 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:16.181 * Looking for test storage... 00:05:16.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:16.181 18:12:34 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:16.181 18:12:34 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:16.181 18:12:34 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:16.181 18:12:34 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:16.181 18:12:34 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.181 18:12:34 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.181 18:12:34 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.181 18:12:34 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.181 18:12:34 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.181 18:12:34 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.181 18:12:34 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.181 18:12:34 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.181 18:12:34 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.181 18:12:34 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.181 18:12:34 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.181 18:12:34 event -- scripts/common.sh@344 -- # case "$op" in 00:05:16.181 18:12:34 event -- scripts/common.sh@345 -- # : 1 00:05:16.181 18:12:34 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.181 18:12:34 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:16.181 18:12:34 event -- scripts/common.sh@365 -- # decimal 1 00:05:16.181 18:12:34 event -- scripts/common.sh@353 -- # local d=1 00:05:16.181 18:12:34 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.181 18:12:34 event -- scripts/common.sh@355 -- # echo 1 00:05:16.181 18:12:34 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.181 18:12:34 event -- scripts/common.sh@366 -- # decimal 2 00:05:16.181 18:12:34 event -- scripts/common.sh@353 -- # local d=2 00:05:16.181 18:12:34 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.181 18:12:34 event -- scripts/common.sh@355 -- # echo 2 00:05:16.181 18:12:34 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.181 18:12:34 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.181 18:12:34 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.181 18:12:34 event -- scripts/common.sh@368 -- # return 0 00:05:16.181 18:12:34 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.181 18:12:34 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:16.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.181 --rc genhtml_branch_coverage=1 00:05:16.181 --rc genhtml_function_coverage=1 00:05:16.181 --rc genhtml_legend=1 00:05:16.181 --rc geninfo_all_blocks=1 00:05:16.181 --rc geninfo_unexecuted_blocks=1 00:05:16.181 00:05:16.181 ' 00:05:16.181 18:12:34 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:16.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.181 --rc genhtml_branch_coverage=1 00:05:16.181 --rc genhtml_function_coverage=1 00:05:16.181 --rc genhtml_legend=1 00:05:16.181 --rc geninfo_all_blocks=1 00:05:16.181 --rc geninfo_unexecuted_blocks=1 00:05:16.181 00:05:16.181 ' 00:05:16.181 18:12:34 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:16.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.181 --rc genhtml_branch_coverage=1 00:05:16.181 --rc genhtml_function_coverage=1 00:05:16.181 --rc genhtml_legend=1 00:05:16.181 --rc geninfo_all_blocks=1 00:05:16.181 --rc geninfo_unexecuted_blocks=1 00:05:16.181 00:05:16.181 ' 00:05:16.181 18:12:34 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:16.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.181 --rc genhtml_branch_coverage=1 00:05:16.181 --rc genhtml_function_coverage=1 00:05:16.181 --rc genhtml_legend=1 00:05:16.181 --rc geninfo_all_blocks=1 00:05:16.181 --rc geninfo_unexecuted_blocks=1 00:05:16.181 00:05:16.181 ' 00:05:16.181 18:12:34 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:16.181 18:12:34 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:16.181 18:12:34 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:16.181 18:12:34 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:16.181 18:12:34 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.181 18:12:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.181 ************************************ 00:05:16.181 START TEST event_perf 00:05:16.181 ************************************ 00:05:16.181 18:12:34 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:16.181 Running I/O for 1 seconds...[2024-10-08 18:12:34.398919] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:05:16.181 [2024-10-08 18:12:34.398983] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127330 ] 00:05:16.181 [2024-10-08 18:12:34.456409] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:16.439 [2024-10-08 18:12:34.569209] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.439 [2024-10-08 18:12:34.569270] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.439 [2024-10-08 18:12:34.569335] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:16.439 [2024-10-08 18:12:34.569339] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.811 Running I/O for 1 seconds... 00:05:17.811 lcore 0: 237837 00:05:17.811 lcore 1: 237836 00:05:17.811 lcore 2: 237836 00:05:17.811 lcore 3: 237837 00:05:17.811 done. 00:05:17.811 00:05:17.811 real 0m1.297s 00:05:17.811 user 0m4.202s 00:05:17.811 sys 0m0.089s 00:05:17.811 18:12:35 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.811 18:12:35 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:17.811 ************************************ 00:05:17.811 END TEST event_perf 00:05:17.811 ************************************ 00:05:17.811 18:12:35 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:17.811 18:12:35 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:17.811 18:12:35 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.811 18:12:35 event -- common/autotest_common.sh@10 -- # set +x 00:05:17.811 ************************************ 00:05:17.811 START TEST event_reactor 00:05:17.811 ************************************ 00:05:17.811 18:12:35 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:17.811 [2024-10-08 18:12:35.752694] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:05:17.812 [2024-10-08 18:12:35.752759] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127489 ] 00:05:17.812 [2024-10-08 18:12:35.808364] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.812 [2024-10-08 18:12:35.913419] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.745 test_start 00:05:18.745 oneshot 00:05:18.745 tick 100 00:05:18.745 tick 100 00:05:18.745 tick 250 00:05:18.745 tick 100 00:05:18.745 tick 100 00:05:18.745 tick 250 00:05:18.745 tick 500 00:05:18.745 tick 100 00:05:18.745 tick 100 00:05:18.745 tick 100 00:05:18.745 tick 250 00:05:18.745 tick 100 00:05:18.745 tick 100 00:05:18.745 test_end 00:05:18.745 00:05:18.745 real 0m1.285s 00:05:18.745 user 0m1.198s 00:05:18.745 sys 0m0.083s 00:05:18.745 18:12:37 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.745 18:12:37 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:18.745 ************************************ 00:05:18.745 END TEST event_reactor 00:05:18.745 ************************************ 00:05:19.003 18:12:37 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:19.003 18:12:37 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:19.003 18:12:37 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.003 18:12:37 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.003 ************************************ 00:05:19.003 START TEST event_reactor_perf 00:05:19.003 ************************************ 00:05:19.003 18:12:37 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:19.003 [2024-10-08 18:12:37.085754] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:05:19.003 [2024-10-08 18:12:37.085823] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127641 ] 00:05:19.003 [2024-10-08 18:12:37.143503] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.003 [2024-10-08 18:12:37.248487] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.375 test_start 00:05:20.375 test_end 00:05:20.375 Performance: 444399 events per second 00:05:20.375 00:05:20.375 real 0m1.289s 00:05:20.375 user 0m1.206s 00:05:20.375 sys 0m0.078s 00:05:20.375 18:12:38 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.375 18:12:38 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:20.375 ************************************ 00:05:20.375 END TEST event_reactor_perf 00:05:20.375 ************************************ 00:05:20.375 18:12:38 event -- event/event.sh@49 -- # uname -s 00:05:20.375 18:12:38 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:20.375 18:12:38 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:20.375 18:12:38 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.375 18:12:38 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.375 18:12:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.375 ************************************ 00:05:20.375 START TEST event_scheduler 00:05:20.375 ************************************ 00:05:20.375 18:12:38 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:20.375 * Looking for test storage... 
00:05:20.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:20.375 18:12:38 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:20.375 18:12:38 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:20.375 18:12:38 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:20.375 18:12:38 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:20.375 18:12:38 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.375 18:12:38 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.375 18:12:38 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.375 18:12:38 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.375 18:12:38 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.375 18:12:38 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.375 18:12:38 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.375 18:12:38 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.375 18:12:38 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.375 18:12:38 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.375 18:12:38 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.375 18:12:38 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:20.375 18:12:38 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:20.375 18:12:38 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.375 18:12:38 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:20.375 18:12:38 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:20.375 18:12:38 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:20.375 18:12:38 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.375 18:12:38 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:20.375 18:12:38 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.375 18:12:38 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:20.375 18:12:38 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:20.375 18:12:38 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.375 18:12:38 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:20.375 18:12:38 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.375 18:12:38 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.375 18:12:38 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.375 18:12:38 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:20.375 18:12:38 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.375 18:12:38 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:20.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.375 --rc genhtml_branch_coverage=1 00:05:20.375 --rc genhtml_function_coverage=1 00:05:20.375 --rc genhtml_legend=1 00:05:20.375 --rc geninfo_all_blocks=1 00:05:20.375 --rc geninfo_unexecuted_blocks=1 00:05:20.375 00:05:20.375 ' 00:05:20.375 18:12:38 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:20.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.375 --rc genhtml_branch_coverage=1 00:05:20.375 --rc genhtml_function_coverage=1 00:05:20.375 --rc genhtml_legend=1 00:05:20.375 --rc geninfo_all_blocks=1 00:05:20.375 --rc geninfo_unexecuted_blocks=1 00:05:20.375 00:05:20.375 ' 00:05:20.375 18:12:38 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:20.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.375 --rc genhtml_branch_coverage=1 00:05:20.375 --rc genhtml_function_coverage=1 00:05:20.375 --rc genhtml_legend=1 00:05:20.375 --rc geninfo_all_blocks=1 00:05:20.375 --rc geninfo_unexecuted_blocks=1 00:05:20.375 00:05:20.375 ' 00:05:20.375 18:12:38 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:20.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.375 --rc genhtml_branch_coverage=1 00:05:20.375 --rc genhtml_function_coverage=1 00:05:20.375 --rc genhtml_legend=1 00:05:20.375 --rc geninfo_all_blocks=1 00:05:20.375 --rc geninfo_unexecuted_blocks=1 00:05:20.375 00:05:20.375 ' 00:05:20.375 18:12:38 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:20.375 18:12:38 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1127950 00:05:20.375 18:12:38 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:20.375 18:12:38 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.375 18:12:38 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
1127950 00:05:20.375 18:12:38 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1127950 ']' 00:05:20.375 18:12:38 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.375 18:12:38 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:20.375 18:12:38 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.375 18:12:38 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:20.375 18:12:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.375 [2024-10-08 18:12:38.609996] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:05:20.375 [2024-10-08 18:12:38.610091] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127950 ] 00:05:20.633 [2024-10-08 18:12:38.667993] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:20.633 [2024-10-08 18:12:38.779729] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.633 [2024-10-08 18:12:38.779785] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.633 [2024-10-08 18:12:38.779846] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:20.633 [2024-10-08 18:12:38.779849] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.633 18:12:38 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:20.633 18:12:38 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:20.633 18:12:38 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:20.633 18:12:38 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.633 18:12:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.633 [2024-10-08 18:12:38.844699] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:20.633 [2024-10-08 18:12:38.844729] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:20.633 [2024-10-08 18:12:38.844746] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:20.633 [2024-10-08 18:12:38.844758] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:20.633 [2024-10-08 18:12:38.844768] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:20.633 18:12:38 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.633 18:12:38 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:20.633 18:12:38 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.633 18:12:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.894 [2024-10-08 18:12:38.944951] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:20.894 18:12:38 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.894 18:12:38 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:20.894 18:12:38 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.894 18:12:38 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.894 18:12:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.894 ************************************ 00:05:20.894 START TEST scheduler_create_thread 00:05:20.894 ************************************ 00:05:20.894 18:12:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:20.894 18:12:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:20.894 18:12:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.894 18:12:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.894 2 00:05:20.894 18:12:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.894 18:12:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:20.894 18:12:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.894 18:12:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.894 3 00:05:20.894 18:12:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.894 18:12:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:20.894 18:12:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.894 18:12:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.894 4 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.894 5 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.894 6 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.894 7 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.894 8 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.894 9 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.894 10 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.894 18:12:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.522 18:12:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.522 18:12:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:21.522 18:12:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:21.522 18:12:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.522 18:12:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.455 18:12:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.455 00:05:22.455 real 0m1.757s 00:05:22.455 user 0m0.008s 00:05:22.455 sys 0m0.006s 00:05:22.455 18:12:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.455 18:12:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.455 ************************************ 00:05:22.455 END TEST scheduler_create_thread 00:05:22.455 ************************************ 00:05:22.713 18:12:40 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:22.713 18:12:40 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1127950 00:05:22.713 18:12:40 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1127950 ']' 00:05:22.713 18:12:40 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 1127950 00:05:22.713 18:12:40 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:22.713 18:12:40 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:22.713 18:12:40 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1127950 00:05:22.713 18:12:40 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:22.713 18:12:40 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:22.713 18:12:40 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1127950' 00:05:22.713 killing process with pid 1127950 00:05:22.713 18:12:40 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1127950 00:05:22.713 18:12:40 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1127950 00:05:22.970 [2024-10-08 18:12:41.213043] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:23.229 00:05:23.229 real 0m3.068s 00:05:23.229 user 0m3.944s 00:05:23.229 sys 0m0.360s 00:05:23.229 18:12:41 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.230 18:12:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:23.230 ************************************ 00:05:23.230 END TEST event_scheduler 00:05:23.230 ************************************ 00:05:23.230 18:12:41 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:23.230 18:12:41 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:23.230 18:12:41 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.230 18:12:41 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.230 18:12:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.489 ************************************ 00:05:23.489 START TEST app_repeat 00:05:23.489 ************************************ 00:05:23.489 18:12:41 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:23.489 18:12:41 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.489 18:12:41 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.489 18:12:41 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:23.489 18:12:41 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.489 18:12:41 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:23.489 18:12:41 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:23.489 18:12:41 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:23.489 18:12:41 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1128286 00:05:23.489 18:12:41 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:23.489 18:12:41 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.489 18:12:41 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1128286' 00:05:23.489 Process app_repeat pid: 1128286 00:05:23.489 18:12:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:23.489 18:12:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:23.489 spdk_app_start Round 0 00:05:23.489 18:12:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1128286 /var/tmp/spdk-nbd.sock 00:05:23.489 18:12:41 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1128286 ']' 00:05:23.489 18:12:41 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:23.489 18:12:41 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:23.489 18:12:41 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:23.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:23.489 18:12:41 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:23.489 18:12:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:23.489 [2024-10-08 18:12:41.565872] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:05:23.489 [2024-10-08 18:12:41.565940] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1128286 ] 00:05:23.489 [2024-10-08 18:12:41.621437] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:23.489 [2024-10-08 18:12:41.722707] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.489 [2024-10-08 18:12:41.722711] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.747 18:12:41 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:23.747 18:12:41 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:23.747 18:12:41 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.005 Malloc0 00:05:24.005 18:12:42 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.263 Malloc1 00:05:24.263 18:12:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:24.263 18:12:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.263 18:12:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.263 18:12:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:24.263 18:12:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.263 18:12:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:24.263 18:12:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:24.263 18:12:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.263 18:12:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.263 18:12:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:24.263 18:12:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.263 18:12:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:24.263 18:12:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:24.263 18:12:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:24.263 18:12:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.263 18:12:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:24.521 /dev/nbd0 00:05:24.521 18:12:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:24.521 18:12:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:24.521 18:12:42 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:24.521 18:12:42 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:24.521 18:12:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:24.521 18:12:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:24.521 18:12:42 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:05:24.521 18:12:42 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:24.521 18:12:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:24.521 18:12:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:24.521 18:12:42 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.521 1+0 records in 00:05:24.521 1+0 records out 00:05:24.521 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224472 s, 18.2 MB/s 00:05:24.521 18:12:42 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:24.521 18:12:42 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:24.521 18:12:42 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:24.521 18:12:42 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:24.521 18:12:42 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:24.521 18:12:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.521 18:12:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.521 18:12:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:24.783 /dev/nbd1 00:05:25.041 18:12:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:25.041 18:12:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:25.041 18:12:43 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:25.041 18:12:43 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:25.041 18:12:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:25.041 18:12:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:25.041 18:12:43 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:25.041 18:12:43 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:25.041 18:12:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:25.041 18:12:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:25.041 18:12:43 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:25.041 1+0 records in 00:05:25.041 1+0 records out 00:05:25.041 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249984 s, 16.4 MB/s 00:05:25.041 18:12:43 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:25.041 18:12:43 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:25.041 18:12:43 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:25.041 18:12:43 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:25.041 18:12:43 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:25.041 18:12:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.041 18:12:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.041 
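[editor's note] The waitfornbd calls traced above gate every use of a freshly attached NBD device: poll /proc/partitions until the name appears, then perform one direct-I/O 4 KiB read and check it produced data. A minimal sketch of that logic follows; the scratch-file path and the sleep between polls are assumptions, the retry budget of 20 matches the trace.

    waitfornbd() {
        local nbd_name=$1 i size
        local test_file=/tmp/nbdtest                        # assumed scratch path

        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                       # assumed pause between polls
        done

        for ((i = 1; i <= 20; i++)); do
            # a single 4 KiB direct read proves the device actually serves I/O
            dd if=/dev/$nbd_name of="$test_file" bs=4096 count=1 iflag=direct || continue
            size=$(stat -c %s "$test_file")
            rm -f "$test_file"
            [ "$size" != 0 ] && return 0
        done
        return 1
    }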
18:12:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.041 18:12:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.041 18:12:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:25.299 { 00:05:25.299 "nbd_device": "/dev/nbd0", 00:05:25.299 "bdev_name": "Malloc0" 00:05:25.299 }, 00:05:25.299 { 00:05:25.299 "nbd_device": "/dev/nbd1", 00:05:25.299 "bdev_name": "Malloc1" 00:05:25.299 } 00:05:25.299 ]' 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:25.299 { 00:05:25.299 "nbd_device": "/dev/nbd0", 00:05:25.299 "bdev_name": "Malloc0" 00:05:25.299 }, 00:05:25.299 { 00:05:25.299 "nbd_device": "/dev/nbd1", 00:05:25.299 "bdev_name": "Malloc1" 00:05:25.299 } 00:05:25.299 ]' 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:25.299 /dev/nbd1' 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:25.299 /dev/nbd1' 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:25.299 256+0 records in 00:05:25.299 256+0 records out 00:05:25.299 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00384873 s, 272 MB/s 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:25.299 256+0 records in 00:05:25.299 256+0 records out 00:05:25.299 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200567 s, 52.3 MB/s 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:25.299 256+0 records in 00:05:25.299 256+0 records out 00:05:25.299 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216577 s, 48.4 MB/s 00:05:25.299 18:12:43 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.299 18:12:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:25.555 18:12:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:25.555 18:12:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:25.555 18:12:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:25.555 18:12:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.555 18:12:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.555 18:12:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:25.555 18:12:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:25.555 18:12:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.556 18:12:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.556 18:12:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:25.812 18:12:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:25.812 18:12:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:25.812 18:12:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:25.812 18:12:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.812 18:12:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:25.812 18:12:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:25.812 18:12:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:25.813 18:12:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.813 18:12:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.813 18:12:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.813 18:12:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.070 18:12:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:26.070 18:12:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:26.070 18:12:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.327 18:12:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:26.327 18:12:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:26.327 18:12:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.327 18:12:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:26.327 18:12:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:26.327 18:12:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:26.327 18:12:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:26.327 18:12:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:26.327 18:12:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:26.327 18:12:44 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:26.584 18:12:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:26.842 [2024-10-08 18:12:44.955312] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.842 [2024-10-08 18:12:45.057460] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.842 [2024-10-08 18:12:45.057465] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.842 [2024-10-08 18:12:45.116876] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:26.842 [2024-10-08 18:12:45.116963] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:30.120 18:12:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:30.121 18:12:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:30.121 spdk_app_start Round 1 00:05:30.121 18:12:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1128286 /var/tmp/spdk-nbd.sock 00:05:30.121 18:12:47 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1128286 ']' 00:05:30.121 18:12:47 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:30.121 18:12:47 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:30.121 18:12:47 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:30.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
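[editor's note] Round 0 above exercised the full data path before tearing down: 256 blocks of 4 KiB urandom were written through each NBD device, then cmp checked the first 1 MiB of each device against the source file. That is the write/verify pair of nbd_dd_data_verify; a condensed sketch under the assumption of a simplified temp-file path (the trace uses one under spdk/test/event):

    nbd_dd_data_verify() {
        local nbd_list=($1) operation=$2
        local tmp_file=/tmp/nbdrandtest                               # assumed scratch path

        if [ "$operation" = write ]; then
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256       # 1 MiB of random data
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct   # push it through each NBD
            done
        elif [ "$operation" = verify ]; then
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"                         # byte-compare the first 1 MiB
            done
            rm "$tmp_file"
        fi
    }

It is invoked once per mode in each round, e.g. nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write followed by the same call with verify.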
00:05:30.121 18:12:47 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:30.121 18:12:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:30.121 18:12:47 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.121 18:12:47 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:30.121 18:12:47 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.121 Malloc0 00:05:30.121 18:12:48 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.379 Malloc1 00:05:30.379 18:12:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.379 18:12:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.379 18:12:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.379 18:12:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:30.379 18:12:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.379 18:12:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:30.379 18:12:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.379 18:12:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.379 18:12:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.379 18:12:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:30.379 18:12:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.379 18:12:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:30.379 18:12:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:30.379 18:12:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:30.379 18:12:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.379 18:12:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:30.637 /dev/nbd0 00:05:30.637 18:12:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:30.637 18:12:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:30.637 18:12:48 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:30.637 18:12:48 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:30.637 18:12:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:30.637 18:12:48 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:30.637 18:12:48 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:30.637 18:12:48 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:30.637 18:12:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:30.637 18:12:48 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:30.637 18:12:48 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:30.637 1+0 records in 00:05:30.637 1+0 records out 00:05:30.637 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000142892 s, 28.7 MB/s 00:05:30.637 18:12:48 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.637 18:12:48 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:30.637 18:12:48 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.637 18:12:48 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:30.637 18:12:48 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:30.637 18:12:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.637 18:12:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.637 18:12:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:30.895 /dev/nbd1 00:05:30.895 18:12:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:30.895 18:12:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:30.895 18:12:49 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:30.895 18:12:49 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:30.895 18:12:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:30.896 18:12:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:30.896 18:12:49 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:30.896 18:12:49 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:30.896 18:12:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:30.896 18:12:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:30.896 18:12:49 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:31.153 1+0 records in 00:05:31.153 1+0 records out 00:05:31.153 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00017415 s, 23.5 MB/s 00:05:31.153 18:12:49 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:31.153 18:12:49 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:31.153 18:12:49 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:31.153 18:12:49 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:31.153 18:12:49 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:31.153 18:12:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.153 18:12:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.153 18:12:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.153 18:12:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.153 18:12:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:31.411 { 00:05:31.411 "nbd_device": "/dev/nbd0", 00:05:31.411 "bdev_name": "Malloc0" 00:05:31.411 }, 00:05:31.411 { 00:05:31.411 "nbd_device": "/dev/nbd1", 00:05:31.411 "bdev_name": "Malloc1" 00:05:31.411 } 00:05:31.411 ]' 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:31.411 { 00:05:31.411 "nbd_device": "/dev/nbd0", 00:05:31.411 "bdev_name": "Malloc0" 00:05:31.411 }, 00:05:31.411 { 00:05:31.411 "nbd_device": "/dev/nbd1", 00:05:31.411 "bdev_name": "Malloc1" 00:05:31.411 } 00:05:31.411 ]' 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:31.411 /dev/nbd1' 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:31.411 /dev/nbd1' 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:31.411 256+0 records in 00:05:31.411 256+0 records out 00:05:31.411 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00486039 s, 216 MB/s 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:31.411 256+0 records in 00:05:31.411 256+0 records out 00:05:31.411 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195543 s, 53.6 MB/s 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:31.411 256+0 records in 00:05:31.411 256+0 records out 00:05:31.411 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214302 s, 48.9 MB/s 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.411 18:12:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:31.670 18:12:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:31.670 18:12:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:31.670 18:12:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:31.670 18:12:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.670 18:12:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.670 18:12:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:31.670 18:12:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.670 18:12:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.670 18:12:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.670 18:12:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:31.928 18:12:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:31.928 18:12:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:31.928 18:12:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:31.928 18:12:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.928 18:12:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.928 18:12:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:31.928 18:12:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.928 18:12:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.928 18:12:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.928 18:12:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.928 18:12:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.185 18:12:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:32.185 18:12:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:32.185 18:12:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:32.185 18:12:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:32.185 18:12:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:32.185 18:12:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.443 18:12:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:32.443 18:12:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:32.443 18:12:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:32.443 18:12:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:32.443 18:12:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:32.443 18:12:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:32.443 18:12:50 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:32.700 18:12:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:32.958 [2024-10-08 18:12:51.022019] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.958 [2024-10-08 18:12:51.124227] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.958 [2024-10-08 18:12:51.124229] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.958 [2024-10-08 18:12:51.184966] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:32.958 [2024-10-08 18:12:51.185040] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:35.483 18:12:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:35.483 18:12:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:35.483 spdk_app_start Round 2 00:05:35.483 18:12:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1128286 /var/tmp/spdk-nbd.sock 00:05:35.483 18:12:53 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1128286 ']' 00:05:35.483 18:12:53 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.483 18:12:53 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.483 18:12:53 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:35.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
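[editor's note] The teardown traced above is the mirror image of device setup: nbd_stop_disk is issued per device over the RPC socket, and waitfornbd_exit polls /proc/partitions until the name disappears so the next round can reattach cleanly. A minimal sketch of that pairing; the poll interval is an assumption, and rpc.py is assumed to be run from the SPDK checkout root.

    nbd_stop_disks() {
        local rpc_server=$1 i
        local nbd_list=($2)

        for i in "${nbd_list[@]}"; do
            ./scripts/rpc.py -s "$rpc_server" nbd_stop_disk "$i"
            waitfornbd_exit "$(basename "$i")"          # e.g. /dev/nbd0 -> nbd0, as in the trace
        done
    }

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break   # gone from the partition table
            sleep 0.1                                          # assumed pause between polls
        done
        return 0
    }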
00:05:35.483 18:12:53 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.483 18:12:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:36.048 18:12:54 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:36.048 18:12:54 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:36.048 18:12:54 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.048 Malloc0 00:05:36.306 18:12:54 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.564 Malloc1 00:05:36.564 18:12:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.564 18:12:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.564 18:12:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.564 18:12:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:36.564 18:12:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.564 18:12:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:36.564 18:12:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.564 18:12:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.564 18:12:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.564 18:12:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:36.564 18:12:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.564 18:12:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:36.564 18:12:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:36.564 18:12:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:36.564 18:12:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.564 18:12:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:36.822 /dev/nbd0 00:05:36.822 18:12:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:36.822 18:12:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:36.822 18:12:54 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:36.822 18:12:54 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:36.822 18:12:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:36.822 18:12:54 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:36.822 18:12:54 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:36.822 18:12:54 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:36.822 18:12:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:36.822 18:12:54 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:36.822 18:12:54 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:36.822 1+0 records in 00:05:36.822 1+0 records out 00:05:36.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256902 s, 15.9 MB/s 00:05:36.822 18:12:54 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.823 18:12:54 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:36.823 18:12:54 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:36.823 18:12:54 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:36.823 18:12:54 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:36.823 18:12:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.823 18:12:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.823 18:12:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:37.081 /dev/nbd1 00:05:37.081 18:12:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:37.081 18:12:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:37.081 18:12:55 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:37.081 18:12:55 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:37.081 18:12:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:37.081 18:12:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:37.081 18:12:55 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:37.081 18:12:55 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:37.081 18:12:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:37.081 18:12:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:37.081 18:12:55 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:37.081 1+0 records in 00:05:37.081 1+0 records out 00:05:37.081 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197775 s, 20.7 MB/s 00:05:37.081 18:12:55 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:37.081 18:12:55 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:37.081 18:12:55 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:37.081 18:12:55 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:37.081 18:12:55 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:37.081 18:12:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:37.081 18:12:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.081 18:12:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.081 18:12:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.081 18:12:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.340 18:12:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:37.340 { 00:05:37.340 "nbd_device": "/dev/nbd0", 00:05:37.340 "bdev_name": "Malloc0" 00:05:37.340 }, 00:05:37.340 { 00:05:37.340 "nbd_device": "/dev/nbd1", 00:05:37.340 "bdev_name": "Malloc1" 00:05:37.340 } 00:05:37.340 ]' 00:05:37.340 18:12:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:37.340 { 00:05:37.340 "nbd_device": "/dev/nbd0", 00:05:37.340 "bdev_name": "Malloc0" 00:05:37.340 }, 00:05:37.340 { 00:05:37.340 "nbd_device": "/dev/nbd1", 00:05:37.340 "bdev_name": "Malloc1" 00:05:37.340 } 00:05:37.340 ]' 00:05:37.340 18:12:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.340 18:12:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:37.340 /dev/nbd1' 00:05:37.340 18:12:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:37.340 /dev/nbd1' 00:05:37.340 18:12:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.340 18:12:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:37.340 18:12:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:37.340 18:12:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:37.340 18:12:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:37.340 18:12:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:37.340 18:12:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.340 18:12:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.340 18:12:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:37.340 18:12:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:37.340 18:12:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:37.340 18:12:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:37.340 256+0 records in 00:05:37.340 256+0 records out 00:05:37.340 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00514027 s, 204 MB/s 00:05:37.340 18:12:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.340 18:12:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:37.599 256+0 records in 00:05:37.599 256+0 records out 00:05:37.599 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203906 s, 51.4 MB/s 00:05:37.599 18:12:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:37.599 18:12:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:37.599 256+0 records in 00:05:37.599 256+0 records out 00:05:37.599 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223282 s, 47.0 MB/s 00:05:37.599 18:12:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:37.599 18:12:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.599 18:12:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:37.599 18:12:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:37.599 18:12:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:37.599 18:12:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:37.599 18:12:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:37.599 18:12:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.599 18:12:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:37.599 18:12:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:37.599 18:12:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:37.599 18:12:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:37.599 18:12:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:37.599 18:12:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.599 18:12:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.599 18:12:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:37.599 18:12:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:37.599 18:12:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.599 18:12:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:37.857 18:12:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:37.857 18:12:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:37.857 18:12:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:37.857 18:12:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.857 18:12:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.857 18:12:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:37.857 18:12:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:37.857 18:12:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.857 18:12:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.857 18:12:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:38.115 18:12:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:38.115 18:12:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:38.115 18:12:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:38.115 18:12:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.115 18:12:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.115 18:12:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:38.115 18:12:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:38.115 18:12:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.115 18:12:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.115 18:12:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.115 18:12:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.372 18:12:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:38.372 18:12:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.372 18:12:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:38.372 18:12:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:38.372 18:12:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:38.372 18:12:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.372 18:12:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:38.372 18:12:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:38.372 18:12:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:38.372 18:12:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:38.372 18:12:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:38.372 18:12:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:38.372 18:12:56 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:38.630 18:12:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:38.888 [2024-10-08 18:12:57.140392] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.146 [2024-10-08 18:12:57.246021] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.146 [2024-10-08 18:12:57.246025] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.146 [2024-10-08 18:12:57.305915] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:39.146 [2024-10-08 18:12:57.305989] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:41.674 18:12:59 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1128286 /var/tmp/spdk-nbd.sock 00:05:41.674 18:12:59 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1128286 ']' 00:05:41.674 18:12:59 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:41.674 18:12:59 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.674 18:12:59 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:41.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
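[editor's note] Between rounds the driver sends spdk_kill_instance SIGTERM over the nbd socket, sleeps 3 seconds, and the next iteration re-enters waitforlisten on the same pid and socket, which is why the "Waiting for process..." banner repeats above. A minimal waitforlisten along those lines is sketched below; the 100-try budget matches max_retries=100 in the trace, while probing readiness with rpc_get_methods over the socket is an assumption about how the real helper detects a listening target.

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i

        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" || return 1                                   # target died while we waited
            if [ -S "$rpc_addr" ] &&
               ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0                                                 # socket exists and answers RPC
            fi
            sleep 0.1
        done
        return 1
    }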
00:05:41.674 18:12:59 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.674 18:12:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:41.932 18:13:00 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.932 18:13:00 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:41.932 18:13:00 event.app_repeat -- event/event.sh@39 -- # killprocess 1128286 00:05:41.932 18:13:00 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1128286 ']' 00:05:41.932 18:13:00 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1128286 00:05:41.932 18:13:00 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:41.932 18:13:00 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:41.932 18:13:00 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1128286 00:05:41.932 18:13:00 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:41.932 18:13:00 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:41.932 18:13:00 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1128286' 00:05:41.932 killing process with pid 1128286 00:05:41.932 18:13:00 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1128286 00:05:41.932 18:13:00 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1128286 00:05:42.193 spdk_app_start is called in Round 0. 00:05:42.193 Shutdown signal received, stop current app iteration 00:05:42.193 Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 reinitialization... 00:05:42.193 spdk_app_start is called in Round 1. 00:05:42.193 Shutdown signal received, stop current app iteration 00:05:42.193 Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 reinitialization... 00:05:42.193 spdk_app_start is called in Round 2. 00:05:42.193 Shutdown signal received, stop current app iteration 00:05:42.193 Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 reinitialization... 00:05:42.193 spdk_app_start is called in Round 3. 
00:05:42.193 Shutdown signal received, stop current app iteration 00:05:42.193 18:13:00 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:42.193 18:13:00 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:42.193 00:05:42.193 real 0m18.905s 00:05:42.193 user 0m41.312s 00:05:42.193 sys 0m3.316s 00:05:42.193 18:13:00 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:42.193 18:13:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:42.193 ************************************ 00:05:42.193 END TEST app_repeat 00:05:42.193 ************************************ 00:05:42.193 18:13:00 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:42.193 18:13:00 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:42.193 18:13:00 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.193 18:13:00 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.193 18:13:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.451 ************************************ 00:05:42.451 START TEST cpu_locks 00:05:42.451 ************************************ 00:05:42.451 18:13:00 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:42.451 * Looking for test storage... 00:05:42.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:42.451 18:13:00 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:42.451 18:13:00 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:05:42.451 18:13:00 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:42.451 18:13:00 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:42.451 18:13:00 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.451 18:13:00 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.451 18:13:00 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.452 18:13:00 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.452 18:13:00 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.452 18:13:00 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.452 18:13:00 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.452 18:13:00 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.452 18:13:00 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.452 18:13:00 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.452 18:13:00 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.452 18:13:00 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:42.452 18:13:00 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:42.452 18:13:00 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.452 18:13:00 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:42.452 18:13:00 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:42.452 18:13:00 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:42.452 18:13:00 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.452 18:13:00 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:42.452 18:13:00 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.452 18:13:00 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:42.452 18:13:00 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:42.452 18:13:00 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.452 18:13:00 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:42.452 18:13:00 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.452 18:13:00 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.452 18:13:00 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.452 18:13:00 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:42.452 18:13:00 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.452 18:13:00 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:42.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.452 --rc genhtml_branch_coverage=1 00:05:42.452 --rc genhtml_function_coverage=1 00:05:42.452 --rc genhtml_legend=1 00:05:42.452 --rc geninfo_all_blocks=1 00:05:42.452 --rc geninfo_unexecuted_blocks=1 00:05:42.452 00:05:42.452 ' 00:05:42.452 18:13:00 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:42.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.452 --rc genhtml_branch_coverage=1 00:05:42.452 --rc genhtml_function_coverage=1 00:05:42.452 --rc genhtml_legend=1 00:05:42.452 --rc geninfo_all_blocks=1 00:05:42.452 --rc geninfo_unexecuted_blocks=1 00:05:42.452 00:05:42.452 ' 00:05:42.452 18:13:00 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:42.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.452 --rc genhtml_branch_coverage=1 00:05:42.452 --rc genhtml_function_coverage=1 00:05:42.452 --rc genhtml_legend=1 00:05:42.452 --rc geninfo_all_blocks=1 00:05:42.452 --rc geninfo_unexecuted_blocks=1 00:05:42.452 00:05:42.452 ' 00:05:42.452 18:13:00 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:42.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.452 --rc genhtml_branch_coverage=1 00:05:42.452 --rc genhtml_function_coverage=1 00:05:42.452 --rc genhtml_legend=1 00:05:42.452 --rc geninfo_all_blocks=1 00:05:42.452 --rc geninfo_unexecuted_blocks=1 00:05:42.452 00:05:42.452 ' 00:05:42.452 18:13:00 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:42.452 18:13:00 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:42.452 18:13:00 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:42.452 18:13:00 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:42.452 18:13:00 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:42.452 18:13:00 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:42.452 18:13:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.452 ************************************ 
00:05:42.452 START TEST default_locks 00:05:42.452 ************************************ 00:05:42.452 18:13:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:42.452 18:13:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1130771 00:05:42.452 18:13:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:42.452 18:13:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1130771 00:05:42.452 18:13:00 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1130771 ']' 00:05:42.452 18:13:00 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.452 18:13:00 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:42.452 18:13:00 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.452 18:13:00 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:42.452 18:13:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.452 [2024-10-08 18:13:00.737727] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:05:42.452 [2024-10-08 18:13:00.737818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1130771 ] 00:05:42.710 [2024-10-08 18:13:00.798536] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.710 [2024-10-08 18:13:00.908872] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.968 18:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.968 18:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:42.968 18:13:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1130771 00:05:42.968 18:13:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1130771 00:05:42.968 18:13:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:43.225 lslocks: write error 00:05:43.225 18:13:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1130771 00:05:43.225 18:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 1130771 ']' 00:05:43.225 18:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 1130771 00:05:43.225 18:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:43.225 18:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:43.225 18:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1130771 00:05:43.225 18:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:43.225 18:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:43.225 18:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 1130771' 00:05:43.225 killing process with pid 1130771 00:05:43.225 18:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 1130771 00:05:43.225 18:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 1130771 00:05:43.791 18:13:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1130771 00:05:43.791 18:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:43.791 18:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1130771 00:05:43.791 18:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:43.791 18:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:43.791 18:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:43.791 18:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:43.791 18:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 1130771 00:05:43.791 18:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1130771 ']' 00:05:43.791 18:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.791 18:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:43.791 18:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:43.791 18:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:43.791 18:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1130771) - No such process 00:05:43.791 ERROR: process (pid: 1130771) is no longer running 00:05:43.791 18:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:43.791 18:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:43.791 18:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:43.791 18:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:43.791 18:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:43.791 18:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:43.791 18:13:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:43.791 18:13:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:43.791 18:13:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:43.791 18:13:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:43.791 00:05:43.791 real 0m1.259s 00:05:43.791 user 0m1.205s 00:05:43.791 sys 0m0.538s 00:05:43.791 18:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.791 18:13:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.791 ************************************ 00:05:43.791 END TEST default_locks 00:05:43.791 ************************************ 00:05:43.791 18:13:01 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:43.791 18:13:01 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.791 18:13:01 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.791 18:13:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.791 ************************************ 00:05:43.791 START TEST default_locks_via_rpc 00:05:43.791 ************************************ 00:05:43.791 18:13:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:43.791 18:13:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1130949 00:05:43.791 18:13:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.791 18:13:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1130949 00:05:43.791 18:13:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1130949 ']' 00:05:43.791 18:13:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.791 18:13:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:43.791 18:13:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
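The default_locks trace above exercises two helpers whose xtrace lines appear verbatim in the log: locks_exist pipes 'lslocks -p <pid>' through 'grep -q spdk_cpu_lock', and killprocess probes the PID with kill -0 and ps before kill/wait. The following stand-alone sketch is reconstructed from that trace, not copied from the SPDK test sources; the _sketch suffixes mark names invented here and the PID is only the one the log happens to show.

    #!/usr/bin/env bash
    # Reconstructed from the xtrace above; illustrative only.
    pid=1130771   # PID taken from the log lines above

    locks_exist_sketch() {
        # The "lslocks: write error" seen in the log is likely just lslocks hitting a
        # broken pipe once grep -q exits early; the grep status is what the test consumes.
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    killprocess_sketch() {
        kill -0 "$1" || return 1            # still alive?
        ps --no-headers -o comm= "$1"       # the log records this as process_name=reactor_0
        kill "$1" && wait "$1" 2>/dev/null  # wait only succeeds for children of this shell, as in the suite
    }

    locks_exist_sketch "$pid" && killprocess_sketch "$pid"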
00:05:43.791 18:13:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:43.791 18:13:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.791 [2024-10-08 18:13:02.048484] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:05:43.791 [2024-10-08 18:13:02.048566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1130949 ] 00:05:44.069 [2024-10-08 18:13:02.108332] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.069 [2024-10-08 18:13:02.216008] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.328 18:13:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:44.328 18:13:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:44.328 18:13:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:44.328 18:13:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.328 18:13:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.328 18:13:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.328 18:13:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:44.328 18:13:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:44.328 18:13:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:44.328 18:13:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:44.328 18:13:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:44.328 18:13:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.328 18:13:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.328 18:13:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.328 18:13:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1130949 00:05:44.328 18:13:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1130949 00:05:44.328 18:13:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:44.586 18:13:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1130949 00:05:44.586 18:13:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 1130949 ']' 00:05:44.586 18:13:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 1130949 00:05:44.586 18:13:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:44.586 18:13:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:44.586 18:13:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1130949 00:05:44.586 18:13:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:44.586 
18:13:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:44.586 18:13:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1130949' 00:05:44.586 killing process with pid 1130949 00:05:44.586 18:13:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 1130949 00:05:44.586 18:13:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 1130949 00:05:45.152 00:05:45.152 real 0m1.298s 00:05:45.152 user 0m1.249s 00:05:45.152 sys 0m0.528s 00:05:45.152 18:13:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.152 18:13:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.152 ************************************ 00:05:45.152 END TEST default_locks_via_rpc 00:05:45.152 ************************************ 00:05:45.152 18:13:03 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:45.152 18:13:03 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.152 18:13:03 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.152 18:13:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.152 ************************************ 00:05:45.152 START TEST non_locking_app_on_locked_coremask 00:05:45.152 ************************************ 00:05:45.152 18:13:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:45.152 18:13:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1131221 00:05:45.152 18:13:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.152 18:13:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1131221 /var/tmp/spdk.sock 00:05:45.152 18:13:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1131221 ']' 00:05:45.152 18:13:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.152 18:13:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.152 18:13:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.152 18:13:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.152 18:13:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.152 [2024-10-08 18:13:03.398649] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:05:45.152 [2024-10-08 18:13:03.398737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1131221 ] 00:05:45.411 [2024-10-08 18:13:03.457163] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.411 [2024-10-08 18:13:03.568552] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.670 18:13:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.670 18:13:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:45.670 18:13:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1131230 00:05:45.670 18:13:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:45.670 18:13:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1131230 /var/tmp/spdk2.sock 00:05:45.670 18:13:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1131230 ']' 00:05:45.670 18:13:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:45.670 18:13:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.670 18:13:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:45.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:45.670 18:13:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.670 18:13:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.670 [2024-10-08 18:13:03.895454] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:05:45.670 [2024-10-08 18:13:03.895530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1131230 ] 00:05:45.929 [2024-10-08 18:13:03.978837] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:45.929 [2024-10-08 18:13:03.978865] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.929 [2024-10-08 18:13:04.191172] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.863 18:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.863 18:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:46.864 18:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1131221 00:05:46.864 18:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1131221 00:05:46.864 18:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.429 lslocks: write error 00:05:47.429 18:13:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1131221 00:05:47.429 18:13:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1131221 ']' 00:05:47.429 18:13:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1131221 00:05:47.429 18:13:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:47.429 18:13:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:47.429 18:13:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1131221 00:05:47.429 18:13:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:47.429 18:13:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:47.429 18:13:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1131221' 00:05:47.429 killing process with pid 1131221 00:05:47.429 18:13:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1131221 00:05:47.429 18:13:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1131221 00:05:48.363 18:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1131230 00:05:48.363 18:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1131230 ']' 00:05:48.363 18:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1131230 00:05:48.363 18:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:48.363 18:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:48.363 18:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1131230 00:05:48.363 18:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:48.363 18:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:48.363 18:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1131230' 00:05:48.363 
killing process with pid 1131230 00:05:48.363 18:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1131230 00:05:48.363 18:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1131230 00:05:48.623 00:05:48.623 real 0m3.536s 00:05:48.623 user 0m3.754s 00:05:48.623 sys 0m1.134s 00:05:48.623 18:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.623 18:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.623 ************************************ 00:05:48.623 END TEST non_locking_app_on_locked_coremask 00:05:48.623 ************************************ 00:05:48.623 18:13:06 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:48.623 18:13:06 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.623 18:13:06 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.623 18:13:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.882 ************************************ 00:05:48.882 START TEST locking_app_on_unlocked_coremask 00:05:48.882 ************************************ 00:05:48.882 18:13:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:48.882 18:13:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1131661 00:05:48.882 18:13:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:48.882 18:13:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1131661 /var/tmp/spdk.sock 00:05:48.882 18:13:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1131661 ']' 00:05:48.882 18:13:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.882 18:13:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.882 18:13:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.882 18:13:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.882 18:13:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.882 [2024-10-08 18:13:06.979641] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:05:48.882 [2024-10-08 18:13:06.979743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1131661 ] 00:05:48.882 [2024-10-08 18:13:07.034686] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
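Both coremask tests in this stretch of the log drive the same pair of invocations, which appear verbatim in the trace: one spdk_tgt is launched with --disable-cpumask-locks (logging "CPU core locks deactivated.") while a second instance shares the mask but listens on its own RPC socket via -r /var/tmp/spdk2.sock. Below is a hedged sketch of that sequence; the binary path and flags are copied from the log, while the backgrounding, sleep and cleanup are added here for illustration (the suite itself uses waitforlisten rather than sleep).

    #!/usr/bin/env bash
    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

    "$SPDK_BIN" -m 0x1 --disable-cpumask-locks &   # first target: core locks deactivated
    pid1=$!
    "$SPDK_BIN" -m 0x1 -r /var/tmp/spdk2.sock &    # second target: same core, separate RPC socket
    pid2=$!

    sleep 1                                        # stand-in for the suite's waitforlisten
    lslocks | grep spdk_cpu_lock || true           # the lock name the suite's locks_exist greps for
    kill "$pid1" "$pid2" 2>/dev/null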
00:05:48.882 [2024-10-08 18:13:07.034718] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.882 [2024-10-08 18:13:07.136466] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.140 18:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.140 18:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:49.140 18:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1131672 00:05:49.140 18:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1131672 /var/tmp/spdk2.sock 00:05:49.140 18:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:49.140 18:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1131672 ']' 00:05:49.140 18:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:49.140 18:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.140 18:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:49.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:49.140 18:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.140 18:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.398 [2024-10-08 18:13:07.452444] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:05:49.398 [2024-10-08 18:13:07.452528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1131672 ] 00:05:49.398 [2024-10-08 18:13:07.536355] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.656 [2024-10-08 18:13:07.757991] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.227 18:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.227 18:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:50.227 18:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1131672 00:05:50.227 18:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1131672 00:05:50.227 18:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:50.793 lslocks: write error 00:05:50.793 18:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1131661 00:05:50.793 18:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1131661 ']' 00:05:50.793 18:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1131661 00:05:50.793 18:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:50.793 18:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:50.793 18:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1131661 00:05:50.793 18:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:50.793 18:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:50.793 18:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1131661' 00:05:50.793 killing process with pid 1131661 00:05:50.793 18:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1131661 00:05:50.793 18:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1131661 00:05:51.727 18:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1131672 00:05:51.727 18:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1131672 ']' 00:05:51.727 18:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1131672 00:05:51.728 18:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:51.728 18:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:51.728 18:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1131672 00:05:51.728 18:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:51.728 18:13:09 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:51.728 18:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1131672' 00:05:51.728 killing process with pid 1131672 00:05:51.728 18:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1131672 00:05:51.728 18:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1131672 00:05:52.292 00:05:52.292 real 0m3.435s 00:05:52.292 user 0m3.638s 00:05:52.292 sys 0m1.087s 00:05:52.292 18:13:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.292 18:13:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.292 ************************************ 00:05:52.292 END TEST locking_app_on_unlocked_coremask 00:05:52.292 ************************************ 00:05:52.292 18:13:10 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:52.292 18:13:10 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.292 18:13:10 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.292 18:13:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.292 ************************************ 00:05:52.292 START TEST locking_app_on_locked_coremask 00:05:52.292 ************************************ 00:05:52.292 18:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:52.292 18:13:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1132095 00:05:52.292 18:13:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.292 18:13:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1132095 /var/tmp/spdk.sock 00:05:52.292 18:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1132095 ']' 00:05:52.292 18:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.292 18:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.292 18:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.292 18:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.292 18:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.292 [2024-10-08 18:13:10.469324] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:05:52.292 [2024-10-08 18:13:10.469402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1132095 ] 00:05:52.292 [2024-10-08 18:13:10.529357] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.581 [2024-10-08 18:13:10.645504] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.865 18:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.865 18:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:52.865 18:13:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1132119 00:05:52.865 18:13:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:52.865 18:13:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1132119 /var/tmp/spdk2.sock 00:05:52.865 18:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:52.865 18:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1132119 /var/tmp/spdk2.sock 00:05:52.865 18:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:52.865 18:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.865 18:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:52.865 18:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.865 18:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1132119 /var/tmp/spdk2.sock 00:05:52.865 18:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1132119 ']' 00:05:52.865 18:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.865 18:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.865 18:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.865 18:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.865 18:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.865 [2024-10-08 18:13:10.972043] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:05:52.865 [2024-10-08 18:13:10.972141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1132119 ] 00:05:52.865 [2024-10-08 18:13:11.058359] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1132095 has claimed it. 00:05:52.865 [2024-10-08 18:13:11.058420] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:53.431 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1132119) - No such process 00:05:53.431 ERROR: process (pid: 1132119) is no longer running 00:05:53.431 18:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.431 18:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:53.431 18:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:53.431 18:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:53.431 18:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:53.431 18:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:53.431 18:13:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1132095 00:05:53.431 18:13:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1132095 00:05:53.431 18:13:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.996 lslocks: write error 00:05:53.996 18:13:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1132095 00:05:53.996 18:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1132095 ']' 00:05:53.996 18:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1132095 00:05:53.996 18:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:53.996 18:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:53.996 18:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1132095 00:05:53.996 18:13:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:53.996 18:13:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:53.997 18:13:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1132095' 00:05:53.997 killing process with pid 1132095 00:05:53.997 18:13:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1132095 00:05:53.997 18:13:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1132095 00:05:54.255 00:05:54.255 real 0m2.084s 00:05:54.255 user 0m2.300s 00:05:54.255 sys 0m0.641s 00:05:54.255 18:13:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:05:54.255 18:13:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.255 ************************************ 00:05:54.255 END TEST locking_app_on_locked_coremask 00:05:54.255 ************************************ 00:05:54.255 18:13:12 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:54.255 18:13:12 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.255 18:13:12 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.255 18:13:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.514 ************************************ 00:05:54.514 START TEST locking_overlapped_coremask 00:05:54.514 ************************************ 00:05:54.514 18:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:54.514 18:13:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1132397 00:05:54.514 18:13:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:54.514 18:13:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1132397 /var/tmp/spdk.sock 00:05:54.514 18:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1132397 ']' 00:05:54.514 18:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.514 18:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:54.514 18:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.514 18:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:54.514 18:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.514 [2024-10-08 18:13:12.608451] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:05:54.514 [2024-10-08 18:13:12.608553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1132397 ] 00:05:54.514 [2024-10-08 18:13:12.663537] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:54.514 [2024-10-08 18:13:12.765983] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.514 [2024-10-08 18:13:12.766089] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.514 [2024-10-08 18:13:12.766098] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.772 18:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.772 18:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:54.772 18:13:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1132415 00:05:54.772 18:13:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:54.772 18:13:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1132415 /var/tmp/spdk2.sock 00:05:54.772 18:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:54.772 18:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1132415 /var/tmp/spdk2.sock 00:05:54.772 18:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:54.772 18:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.772 18:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:54.772 18:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.772 18:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1132415 /var/tmp/spdk2.sock 00:05:54.772 18:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1132415 ']' 00:05:54.772 18:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:54.772 18:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:54.772 18:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:54.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:54.772 18:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:54.772 18:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.030 [2024-10-08 18:13:13.088196] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:05:55.030 [2024-10-08 18:13:13.088296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1132415 ] 00:05:55.030 [2024-10-08 18:13:13.178107] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1132397 has claimed it. 00:05:55.030 [2024-10-08 18:13:13.178173] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:55.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1132415) - No such process 00:05:55.596 ERROR: process (pid: 1132415) is no longer running 00:05:55.596 18:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:55.596 18:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:55.596 18:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:55.596 18:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:55.596 18:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:55.596 18:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:55.596 18:13:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:55.596 18:13:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:55.596 18:13:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:55.596 18:13:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:55.596 18:13:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1132397 00:05:55.596 18:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 1132397 ']' 00:05:55.596 18:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 1132397 00:05:55.596 18:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:55.596 18:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:55.596 18:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1132397 00:05:55.596 18:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:55.596 18:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:55.596 18:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1132397' 00:05:55.596 killing process with pid 1132397 00:05:55.596 18:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 1132397 00:05:55.596 18:13:13 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 1132397 00:05:56.162 00:05:56.162 real 0m1.762s 00:05:56.162 user 0m4.770s 00:05:56.162 sys 0m0.459s 00:05:56.162 18:13:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.162 18:13:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.162 ************************************ 00:05:56.162 END TEST locking_overlapped_coremask 00:05:56.162 ************************************ 00:05:56.162 18:13:14 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:56.162 18:13:14 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.162 18:13:14 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.162 18:13:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.162 ************************************ 00:05:56.162 START TEST locking_overlapped_coremask_via_rpc 00:05:56.162 ************************************ 00:05:56.162 18:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:56.162 18:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1132582 00:05:56.162 18:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:56.162 18:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1132582 /var/tmp/spdk.sock 00:05:56.162 18:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1132582 ']' 00:05:56.162 18:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.162 18:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:56.162 18:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.162 18:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:56.162 18:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.162 [2024-10-08 18:13:14.428488] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:05:56.162 [2024-10-08 18:13:14.428575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1132582 ] 00:05:56.420 [2024-10-08 18:13:14.490670] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:56.420 [2024-10-08 18:13:14.490708] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:56.420 [2024-10-08 18:13:14.602887] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.420 [2024-10-08 18:13:14.602944] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.420 [2024-10-08 18:13:14.602948] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.678 18:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.678 18:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:56.678 18:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1132702 00:05:56.678 18:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1132702 /var/tmp/spdk2.sock 00:05:56.678 18:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1132702 ']' 00:05:56.678 18:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.678 18:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:56.678 18:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.678 18:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:56.678 18:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:56.678 18:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.678 [2024-10-08 18:13:14.943995] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:05:56.678 [2024-10-08 18:13:14.944081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1132702 ] 00:05:56.936 [2024-10-08 18:13:15.032464] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:56.936 [2024-10-08 18:13:15.032513] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:57.194 [2024-10-08 18:13:15.251764] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:57.194 [2024-10-08 18:13:15.255382] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:05:57.194 [2024-10-08 18:13:15.255385] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.759 18:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:57.759 18:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:57.760 18:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:57.760 18:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.760 18:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.760 18:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.760 18:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:57.760 18:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:57.760 18:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:57.760 18:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:57.760 18:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.760 18:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:57.760 18:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.760 18:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:57.760 18:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.760 18:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.760 [2024-10-08 18:13:15.926396] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1132582 has claimed it. 
00:05:57.760 request: 00:05:57.760 { 00:05:57.760 "method": "framework_enable_cpumask_locks", 00:05:57.760 "req_id": 1 00:05:57.760 } 00:05:57.760 Got JSON-RPC error response 00:05:57.760 response: 00:05:57.760 { 00:05:57.760 "code": -32603, 00:05:57.760 "message": "Failed to claim CPU core: 2" 00:05:57.760 } 00:05:57.760 18:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:57.760 18:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:57.760 18:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:57.760 18:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:57.760 18:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:57.760 18:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1132582 /var/tmp/spdk.sock 00:05:57.760 18:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1132582 ']' 00:05:57.760 18:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.760 18:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.760 18:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.760 18:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.760 18:13:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.017 18:13:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:58.017 18:13:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:58.018 18:13:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1132702 /var/tmp/spdk2.sock 00:05:58.018 18:13:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1132702 ']' 00:05:58.018 18:13:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.018 18:13:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.018 18:13:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
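The request/response pair above is the failure the test is looking for: the second target was launched on mask 0x1c with --disable-cpumask-locks while the first target still holds the lock for core 2, so asking it to claim the locks over JSON-RPC fails with -32603 ("Failed to claim CPU core: 2"). A minimal sketch of driving the same RPC by hand, assuming a build tree laid out like this job's (binary path, flags and method name are taken from the trace; the sleep is an illustrative stand-in for waitforlisten):

# Start a target without taking the per-core locks, then try to claim them later.
./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
sleep 2

# Succeeds only if no other process holds /var/tmp/spdk_cpu_lock_* for cores 2-4;
# otherwise it returns the JSON-RPC -32603 error shown above.
./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks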
00:05:58.018 18:13:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.018 18:13:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.276 18:13:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:58.276 18:13:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:58.276 18:13:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:58.276 18:13:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:58.276 18:13:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:58.276 18:13:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:58.276 00:05:58.276 real 0m2.088s 00:05:58.276 user 0m1.105s 00:05:58.276 sys 0m0.183s 00:05:58.276 18:13:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.276 18:13:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.276 ************************************ 00:05:58.276 END TEST locking_overlapped_coremask_via_rpc 00:05:58.276 ************************************ 00:05:58.276 18:13:16 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:58.276 18:13:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1132582 ]] 00:05:58.276 18:13:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1132582 00:05:58.276 18:13:16 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1132582 ']' 00:05:58.276 18:13:16 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1132582 00:05:58.276 18:13:16 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:58.276 18:13:16 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:58.276 18:13:16 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1132582 00:05:58.276 18:13:16 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:58.276 18:13:16 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:58.276 18:13:16 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1132582' 00:05:58.276 killing process with pid 1132582 00:05:58.276 18:13:16 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1132582 00:05:58.276 18:13:16 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1132582 00:05:58.840 18:13:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1132702 ]] 00:05:58.841 18:13:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1132702 00:05:58.841 18:13:16 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1132702 ']' 00:05:58.841 18:13:16 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1132702 00:05:58.841 18:13:16 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:58.841 18:13:17 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:05:58.841 18:13:17 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1132702 00:05:58.841 18:13:17 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:58.841 18:13:17 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:58.841 18:13:17 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1132702' 00:05:58.841 killing process with pid 1132702 00:05:58.841 18:13:17 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1132702 00:05:58.841 18:13:17 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1132702 00:05:59.406 18:13:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:59.406 18:13:17 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:59.406 18:13:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1132582 ]] 00:05:59.406 18:13:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1132582 00:05:59.406 18:13:17 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1132582 ']' 00:05:59.406 18:13:17 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1132582 00:05:59.406 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1132582) - No such process 00:05:59.406 18:13:17 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1132582 is not found' 00:05:59.406 Process with pid 1132582 is not found 00:05:59.406 18:13:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1132702 ]] 00:05:59.406 18:13:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1132702 00:05:59.406 18:13:17 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1132702 ']' 00:05:59.406 18:13:17 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1132702 00:05:59.406 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1132702) - No such process 00:05:59.406 18:13:17 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1132702 is not found' 00:05:59.406 Process with pid 1132702 is not found 00:05:59.406 18:13:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:59.406 00:05:59.406 real 0m17.027s 00:05:59.406 user 0m29.756s 00:05:59.406 sys 0m5.561s 00:05:59.406 18:13:17 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.406 18:13:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.406 ************************************ 00:05:59.406 END TEST cpu_locks 00:05:59.406 ************************************ 00:05:59.406 00:05:59.406 real 0m43.327s 00:05:59.406 user 1m21.838s 00:05:59.406 sys 0m9.750s 00:05:59.406 18:13:17 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.406 18:13:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.406 ************************************ 00:05:59.406 END TEST event 00:05:59.406 ************************************ 00:05:59.406 18:13:17 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:59.406 18:13:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.406 18:13:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.406 18:13:17 -- common/autotest_common.sh@10 -- # set +x 00:05:59.406 ************************************ 00:05:59.406 START TEST thread 00:05:59.406 ************************************ 00:05:59.406 18:13:17 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:59.406 * Looking for test storage... 00:05:59.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:59.406 18:13:17 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:59.406 18:13:17 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:05:59.406 18:13:17 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:59.665 18:13:17 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:59.665 18:13:17 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.665 18:13:17 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.665 18:13:17 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.665 18:13:17 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.665 18:13:17 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.665 18:13:17 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.665 18:13:17 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.665 18:13:17 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.665 18:13:17 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.665 18:13:17 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.665 18:13:17 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.665 18:13:17 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:59.665 18:13:17 thread -- scripts/common.sh@345 -- # : 1 00:05:59.665 18:13:17 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.665 18:13:17 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:59.665 18:13:17 thread -- scripts/common.sh@365 -- # decimal 1 00:05:59.665 18:13:17 thread -- scripts/common.sh@353 -- # local d=1 00:05:59.665 18:13:17 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.665 18:13:17 thread -- scripts/common.sh@355 -- # echo 1 00:05:59.665 18:13:17 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.665 18:13:17 thread -- scripts/common.sh@366 -- # decimal 2 00:05:59.665 18:13:17 thread -- scripts/common.sh@353 -- # local d=2 00:05:59.665 18:13:17 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.665 18:13:17 thread -- scripts/common.sh@355 -- # echo 2 00:05:59.665 18:13:17 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.665 18:13:17 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.665 18:13:17 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.665 18:13:17 thread -- scripts/common.sh@368 -- # return 0 00:05:59.665 18:13:17 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.665 18:13:17 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:59.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.665 --rc genhtml_branch_coverage=1 00:05:59.665 --rc genhtml_function_coverage=1 00:05:59.665 --rc genhtml_legend=1 00:05:59.665 --rc geninfo_all_blocks=1 00:05:59.665 --rc geninfo_unexecuted_blocks=1 00:05:59.665 00:05:59.665 ' 00:05:59.665 18:13:17 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:59.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.665 --rc genhtml_branch_coverage=1 00:05:59.665 --rc genhtml_function_coverage=1 00:05:59.665 --rc genhtml_legend=1 00:05:59.665 --rc geninfo_all_blocks=1 00:05:59.665 --rc geninfo_unexecuted_blocks=1 00:05:59.665 
00:05:59.665 ' 00:05:59.665 18:13:17 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:59.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.665 --rc genhtml_branch_coverage=1 00:05:59.665 --rc genhtml_function_coverage=1 00:05:59.665 --rc genhtml_legend=1 00:05:59.665 --rc geninfo_all_blocks=1 00:05:59.665 --rc geninfo_unexecuted_blocks=1 00:05:59.665 00:05:59.665 ' 00:05:59.665 18:13:17 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:59.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.665 --rc genhtml_branch_coverage=1 00:05:59.665 --rc genhtml_function_coverage=1 00:05:59.665 --rc genhtml_legend=1 00:05:59.665 --rc geninfo_all_blocks=1 00:05:59.665 --rc geninfo_unexecuted_blocks=1 00:05:59.665 00:05:59.665 ' 00:05:59.665 18:13:17 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:59.665 18:13:17 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:59.665 18:13:17 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.665 18:13:17 thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.665 ************************************ 00:05:59.665 START TEST thread_poller_perf 00:05:59.665 ************************************ 00:05:59.665 18:13:17 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:59.665 [2024-10-08 18:13:17.771099] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:05:59.665 [2024-10-08 18:13:17.771153] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1133089 ] 00:05:59.666 [2024-10-08 18:13:17.829137] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.666 [2024-10-08 18:13:17.937105] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.666 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:01.037 [2024-10-08T16:13:19.324Z] ====================================== 00:06:01.037 [2024-10-08T16:13:19.324Z] busy:2712882198 (cyc) 00:06:01.037 [2024-10-08T16:13:19.324Z] total_run_count: 369000 00:06:01.037 [2024-10-08T16:13:19.324Z] tsc_hz: 2700000000 (cyc) 00:06:01.037 [2024-10-08T16:13:19.324Z] ====================================== 00:06:01.037 [2024-10-08T16:13:19.324Z] poller_cost: 7351 (cyc), 2722 (nsec) 00:06:01.037 00:06:01.037 real 0m1.299s 00:06:01.037 user 0m1.218s 00:06:01.037 sys 0m0.076s 00:06:01.037 18:13:19 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.037 18:13:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:01.037 ************************************ 00:06:01.037 END TEST thread_poller_perf 00:06:01.037 ************************************ 00:06:01.037 18:13:19 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:01.037 18:13:19 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:01.037 18:13:19 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.037 18:13:19 thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.037 ************************************ 00:06:01.037 START TEST thread_poller_perf 00:06:01.037 ************************************ 00:06:01.037 18:13:19 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:01.038 [2024-10-08 18:13:19.125006] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:06:01.038 [2024-10-08 18:13:19.125071] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1133362 ] 00:06:01.038 [2024-10-08 18:13:19.182352] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.038 [2024-10-08 18:13:19.285223] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.038 Running 1000 pollers for 1 seconds with 0 microseconds period. 
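The poller_cost figures printed by poller_perf are straight arithmetic on the counters next to them: busy TSC cycles divided by total_run_count gives the per-poll cost in cycles, and dividing that by tsc_hz converts it to nanoseconds. For the 1-microsecond-period run above, 2712882198 / 369000 is about 7351 cycles, and 7351 cycles at the reported 2.7 GHz TSC is about 2722 ns; the busy-poll run whose results follow works out the same way (571 cycles, 211 ns). A tiny sketch of the recomputation, assuming the three values are copied from a finished run (numbers hard-coded from the log; variable names are illustrative):

# Recompute poller_cost from the counters reported by poller_perf.
busy=2712882198        # busy TSC cycles over the measurement window
runs=369000            # total_run_count
tsc_hz=2700000000      # TSC frequency reported by the tool
awk -v b="$busy" -v r="$runs" -v hz="$tsc_hz" \
    'BEGIN { cyc = int(b / r); printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc * 1e9 / hz }'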
00:06:02.411 [2024-10-08T16:13:20.698Z] ====================================== 00:06:02.411 [2024-10-08T16:13:20.698Z] busy:2702071185 (cyc) 00:06:02.411 [2024-10-08T16:13:20.698Z] total_run_count: 4730000 00:06:02.411 [2024-10-08T16:13:20.698Z] tsc_hz: 2700000000 (cyc) 00:06:02.411 [2024-10-08T16:13:20.698Z] ====================================== 00:06:02.411 [2024-10-08T16:13:20.698Z] poller_cost: 571 (cyc), 211 (nsec) 00:06:02.411 00:06:02.411 real 0m1.292s 00:06:02.411 user 0m1.201s 00:06:02.411 sys 0m0.086s 00:06:02.411 18:13:20 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.411 18:13:20 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:02.411 ************************************ 00:06:02.411 END TEST thread_poller_perf 00:06:02.411 ************************************ 00:06:02.411 18:13:20 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:02.411 00:06:02.411 real 0m2.822s 00:06:02.411 user 0m2.545s 00:06:02.411 sys 0m0.281s 00:06:02.411 18:13:20 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.411 18:13:20 thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.411 ************************************ 00:06:02.411 END TEST thread 00:06:02.411 ************************************ 00:06:02.411 18:13:20 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:02.411 18:13:20 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:02.411 18:13:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.411 18:13:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.411 18:13:20 -- common/autotest_common.sh@10 -- # set +x 00:06:02.411 ************************************ 00:06:02.411 START TEST app_cmdline 00:06:02.411 ************************************ 00:06:02.411 18:13:20 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:02.411 * Looking for test storage... 
00:06:02.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:02.411 18:13:20 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:02.411 18:13:20 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:02.411 18:13:20 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:02.411 18:13:20 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:02.411 18:13:20 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.411 18:13:20 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.411 18:13:20 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.411 18:13:20 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.411 18:13:20 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.411 18:13:20 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.411 18:13:20 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.411 18:13:20 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.411 18:13:20 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.411 18:13:20 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.411 18:13:20 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.411 18:13:20 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:02.411 18:13:20 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:02.411 18:13:20 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.411 18:13:20 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:02.411 18:13:20 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:02.411 18:13:20 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:02.411 18:13:20 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.411 18:13:20 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:02.411 18:13:20 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.411 18:13:20 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:02.411 18:13:20 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:02.411 18:13:20 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.411 18:13:20 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:02.411 18:13:20 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.411 18:13:20 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.411 18:13:20 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.411 18:13:20 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:02.411 18:13:20 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.411 18:13:20 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:02.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.411 --rc genhtml_branch_coverage=1 00:06:02.411 --rc genhtml_function_coverage=1 00:06:02.411 --rc genhtml_legend=1 00:06:02.411 --rc geninfo_all_blocks=1 00:06:02.411 --rc geninfo_unexecuted_blocks=1 00:06:02.411 00:06:02.411 ' 00:06:02.411 18:13:20 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:02.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.411 --rc genhtml_branch_coverage=1 00:06:02.411 --rc genhtml_function_coverage=1 00:06:02.411 --rc genhtml_legend=1 00:06:02.411 --rc geninfo_all_blocks=1 00:06:02.411 --rc geninfo_unexecuted_blocks=1 
00:06:02.411 00:06:02.411 ' 00:06:02.411 18:13:20 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:02.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.411 --rc genhtml_branch_coverage=1 00:06:02.411 --rc genhtml_function_coverage=1 00:06:02.411 --rc genhtml_legend=1 00:06:02.411 --rc geninfo_all_blocks=1 00:06:02.411 --rc geninfo_unexecuted_blocks=1 00:06:02.411 00:06:02.411 ' 00:06:02.411 18:13:20 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:02.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.411 --rc genhtml_branch_coverage=1 00:06:02.411 --rc genhtml_function_coverage=1 00:06:02.411 --rc genhtml_legend=1 00:06:02.411 --rc geninfo_all_blocks=1 00:06:02.411 --rc geninfo_unexecuted_blocks=1 00:06:02.411 00:06:02.411 ' 00:06:02.411 18:13:20 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:02.411 18:13:20 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1133563 00:06:02.411 18:13:20 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:02.411 18:13:20 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1133563 00:06:02.411 18:13:20 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 1133563 ']' 00:06:02.411 18:13:20 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.411 18:13:20 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.411 18:13:20 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.411 18:13:20 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.411 18:13:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:02.411 [2024-10-08 18:13:20.687449] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:06:02.411 [2024-10-08 18:13:20.687532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1133563 ] 00:06:02.670 [2024-10-08 18:13:20.748142] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.670 [2024-10-08 18:13:20.858072] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.928 18:13:21 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.928 18:13:21 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:02.928 18:13:21 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:03.186 { 00:06:03.186 "version": "SPDK v25.01-pre git sha1 fdd35b2ab", 00:06:03.186 "fields": { 00:06:03.186 "major": 25, 00:06:03.186 "minor": 1, 00:06:03.186 "patch": 0, 00:06:03.186 "suffix": "-pre", 00:06:03.186 "commit": "fdd35b2ab" 00:06:03.186 } 00:06:03.186 } 00:06:03.186 18:13:21 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:03.186 18:13:21 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:03.186 18:13:21 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:03.186 18:13:21 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:03.186 18:13:21 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:03.186 18:13:21 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.186 18:13:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:03.186 18:13:21 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:03.186 18:13:21 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:03.186 18:13:21 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.186 18:13:21 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:03.186 18:13:21 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:03.186 18:13:21 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:03.186 18:13:21 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:03.186 18:13:21 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:03.186 18:13:21 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:03.186 18:13:21 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.186 18:13:21 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:03.186 18:13:21 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.186 18:13:21 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:03.186 18:13:21 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.186 18:13:21 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:03.186 18:13:21 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:03.186 18:13:21 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:03.445 request: 00:06:03.445 { 00:06:03.445 "method": "env_dpdk_get_mem_stats", 00:06:03.445 "req_id": 1 00:06:03.445 } 00:06:03.445 Got JSON-RPC error response 00:06:03.445 response: 00:06:03.445 { 00:06:03.445 "code": -32601, 00:06:03.445 "message": "Method not found" 00:06:03.445 } 00:06:03.445 18:13:21 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:03.445 18:13:21 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:03.445 18:13:21 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:03.445 18:13:21 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:03.445 18:13:21 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1133563 00:06:03.445 18:13:21 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 1133563 ']' 00:06:03.445 18:13:21 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 1133563 00:06:03.445 18:13:21 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:03.445 18:13:21 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:03.445 18:13:21 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1133563 00:06:03.702 18:13:21 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:03.702 18:13:21 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:03.702 18:13:21 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1133563' 00:06:03.702 killing process with pid 1133563 00:06:03.702 18:13:21 app_cmdline -- common/autotest_common.sh@969 -- # kill 1133563 00:06:03.702 18:13:21 app_cmdline -- common/autotest_common.sh@974 -- # wait 1133563 00:06:03.960 00:06:03.960 real 0m1.726s 00:06:03.961 user 0m2.078s 00:06:03.961 sys 0m0.514s 00:06:03.961 18:13:22 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.961 18:13:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:03.961 ************************************ 00:06:03.961 END TEST app_cmdline 00:06:03.961 ************************************ 00:06:03.961 18:13:22 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:03.961 18:13:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.961 18:13:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.961 18:13:22 -- common/autotest_common.sh@10 -- # set +x 00:06:04.219 ************************************ 00:06:04.219 START TEST version 00:06:04.219 ************************************ 00:06:04.219 18:13:22 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:04.219 * Looking for test storage... 
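The "Method not found" (-32601) reply above is exactly what the cmdline test expects: the target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so any method outside that allow-list, here env_dpdk_get_mem_stats, is rejected at the RPC layer even though the handler exists in the binary. A minimal sketch of poking at that allow-list by hand, assuming the default /var/tmp/spdk.sock socket (binary path, flag and method names are taken from the trace; the rest is illustrative):

# Start a target that only exposes two RPC methods.
./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
sleep 2
./scripts/rpc.py spdk_get_version         # allowed: returns the version JSON shown above
./scripts/rpc.py rpc_get_methods          # allowed: lists exactly the two permitted methods
./scripts/rpc.py env_dpdk_get_mem_stats   # rejected with -32601, "Method not found"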
00:06:04.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:04.219 18:13:22 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:04.219 18:13:22 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:04.219 18:13:22 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:04.219 18:13:22 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:04.219 18:13:22 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.219 18:13:22 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.220 18:13:22 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.220 18:13:22 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.220 18:13:22 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.220 18:13:22 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.220 18:13:22 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.220 18:13:22 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.220 18:13:22 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.220 18:13:22 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.220 18:13:22 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.220 18:13:22 version -- scripts/common.sh@344 -- # case "$op" in 00:06:04.220 18:13:22 version -- scripts/common.sh@345 -- # : 1 00:06:04.220 18:13:22 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.220 18:13:22 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.220 18:13:22 version -- scripts/common.sh@365 -- # decimal 1 00:06:04.220 18:13:22 version -- scripts/common.sh@353 -- # local d=1 00:06:04.220 18:13:22 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.220 18:13:22 version -- scripts/common.sh@355 -- # echo 1 00:06:04.220 18:13:22 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.220 18:13:22 version -- scripts/common.sh@366 -- # decimal 2 00:06:04.220 18:13:22 version -- scripts/common.sh@353 -- # local d=2 00:06:04.220 18:13:22 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.220 18:13:22 version -- scripts/common.sh@355 -- # echo 2 00:06:04.220 18:13:22 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.220 18:13:22 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.220 18:13:22 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.220 18:13:22 version -- scripts/common.sh@368 -- # return 0 00:06:04.220 18:13:22 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.220 18:13:22 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:04.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.220 --rc genhtml_branch_coverage=1 00:06:04.220 --rc genhtml_function_coverage=1 00:06:04.220 --rc genhtml_legend=1 00:06:04.220 --rc geninfo_all_blocks=1 00:06:04.220 --rc geninfo_unexecuted_blocks=1 00:06:04.220 00:06:04.220 ' 00:06:04.220 18:13:22 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:04.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.220 --rc genhtml_branch_coverage=1 00:06:04.220 --rc genhtml_function_coverage=1 00:06:04.220 --rc genhtml_legend=1 00:06:04.220 --rc geninfo_all_blocks=1 00:06:04.220 --rc geninfo_unexecuted_blocks=1 00:06:04.220 00:06:04.220 ' 00:06:04.220 18:13:22 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:04.220 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.220 --rc genhtml_branch_coverage=1 00:06:04.220 --rc genhtml_function_coverage=1 00:06:04.220 --rc genhtml_legend=1 00:06:04.220 --rc geninfo_all_blocks=1 00:06:04.220 --rc geninfo_unexecuted_blocks=1 00:06:04.220 00:06:04.220 ' 00:06:04.220 18:13:22 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:04.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.220 --rc genhtml_branch_coverage=1 00:06:04.220 --rc genhtml_function_coverage=1 00:06:04.220 --rc genhtml_legend=1 00:06:04.220 --rc geninfo_all_blocks=1 00:06:04.220 --rc geninfo_unexecuted_blocks=1 00:06:04.220 00:06:04.220 ' 00:06:04.220 18:13:22 version -- app/version.sh@17 -- # get_header_version major 00:06:04.220 18:13:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:04.220 18:13:22 version -- app/version.sh@14 -- # cut -f2 00:06:04.220 18:13:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:04.220 18:13:22 version -- app/version.sh@17 -- # major=25 00:06:04.220 18:13:22 version -- app/version.sh@18 -- # get_header_version minor 00:06:04.220 18:13:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:04.220 18:13:22 version -- app/version.sh@14 -- # cut -f2 00:06:04.220 18:13:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:04.220 18:13:22 version -- app/version.sh@18 -- # minor=1 00:06:04.220 18:13:22 version -- app/version.sh@19 -- # get_header_version patch 00:06:04.220 18:13:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:04.220 18:13:22 version -- app/version.sh@14 -- # cut -f2 00:06:04.220 18:13:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:04.220 18:13:22 version -- app/version.sh@19 -- # patch=0 00:06:04.220 18:13:22 version -- app/version.sh@20 -- # get_header_version suffix 00:06:04.220 18:13:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:04.220 18:13:22 version -- app/version.sh@14 -- # cut -f2 00:06:04.220 18:13:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:04.220 18:13:22 version -- app/version.sh@20 -- # suffix=-pre 00:06:04.220 18:13:22 version -- app/version.sh@22 -- # version=25.1 00:06:04.220 18:13:22 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:04.220 18:13:22 version -- app/version.sh@28 -- # version=25.1rc0 00:06:04.220 18:13:22 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:04.220 18:13:22 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:04.220 18:13:22 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:04.220 18:13:22 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:04.220 00:06:04.220 real 0m0.203s 00:06:04.220 user 0m0.124s 00:06:04.220 sys 0m0.105s 00:06:04.220 18:13:22 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.220 
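version.sh builds its 25.1rc0 string by scraping include/spdk/version.h with the grep | cut | tr pipeline traced above, then cross-checks it against python3 -c 'import spdk; print(spdk.__version__)'. A condensed sketch of the header-parsing half, assuming it runs from the root of an SPDK source tree (macro names, header path and the pipeline come from the trace; the helper function and the -pre to rc0 mapping are illustrative):

# Pull one SPDK_VERSION_* macro out of include/spdk/version.h.
get_header_version() {
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h \
        | cut -f2 | tr -d '"'
}
major=$(get_header_version MAJOR)     # 25
minor=$(get_header_version MINOR)     # 1
patch=$(get_header_version PATCH)     # 0
suffix=$(get_header_version SUFFIX)   # -pre
version="${major}.${minor}"
(( patch != 0 )) && version="${version}.${patch}"
echo "${version}${suffix:+rc0}"       # 25.1rc0 for a -pre tree, matching the comparison above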
18:13:22 version -- common/autotest_common.sh@10 -- # set +x 00:06:04.220 ************************************ 00:06:04.220 END TEST version 00:06:04.220 ************************************ 00:06:04.220 18:13:22 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:04.220 18:13:22 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:04.220 18:13:22 -- spdk/autotest.sh@194 -- # uname -s 00:06:04.220 18:13:22 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:04.220 18:13:22 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:04.220 18:13:22 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:04.220 18:13:22 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:04.220 18:13:22 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:04.220 18:13:22 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:04.220 18:13:22 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:04.220 18:13:22 -- common/autotest_common.sh@10 -- # set +x 00:06:04.220 18:13:22 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:04.220 18:13:22 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:06:04.220 18:13:22 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:06:04.220 18:13:22 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:06:04.220 18:13:22 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:06:04.220 18:13:22 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:06:04.220 18:13:22 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:04.220 18:13:22 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:04.220 18:13:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.220 18:13:22 -- common/autotest_common.sh@10 -- # set +x 00:06:04.479 ************************************ 00:06:04.479 START TEST nvmf_tcp 00:06:04.479 ************************************ 00:06:04.479 18:13:22 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:04.479 * Looking for test storage... 
00:06:04.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:04.479 18:13:22 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:04.479 18:13:22 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:04.479 18:13:22 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:04.479 18:13:22 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:04.479 18:13:22 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.479 18:13:22 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.479 18:13:22 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.479 18:13:22 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.479 18:13:22 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.479 18:13:22 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.479 18:13:22 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.479 18:13:22 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.479 18:13:22 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.479 18:13:22 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.479 18:13:22 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.479 18:13:22 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:04.479 18:13:22 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:04.479 18:13:22 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.479 18:13:22 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.479 18:13:22 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:04.479 18:13:22 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:04.479 18:13:22 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.479 18:13:22 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:04.479 18:13:22 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.480 18:13:22 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:04.480 18:13:22 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:04.480 18:13:22 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.480 18:13:22 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:04.480 18:13:22 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.480 18:13:22 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.480 18:13:22 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.480 18:13:22 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:04.480 18:13:22 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.480 18:13:22 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:04.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.480 --rc genhtml_branch_coverage=1 00:06:04.480 --rc genhtml_function_coverage=1 00:06:04.480 --rc genhtml_legend=1 00:06:04.480 --rc geninfo_all_blocks=1 00:06:04.480 --rc geninfo_unexecuted_blocks=1 00:06:04.480 00:06:04.480 ' 00:06:04.480 18:13:22 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:04.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.480 --rc genhtml_branch_coverage=1 00:06:04.480 --rc genhtml_function_coverage=1 00:06:04.480 --rc genhtml_legend=1 00:06:04.480 --rc geninfo_all_blocks=1 00:06:04.480 --rc geninfo_unexecuted_blocks=1 00:06:04.480 00:06:04.480 ' 00:06:04.480 18:13:22 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:06:04.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.480 --rc genhtml_branch_coverage=1 00:06:04.480 --rc genhtml_function_coverage=1 00:06:04.480 --rc genhtml_legend=1 00:06:04.480 --rc geninfo_all_blocks=1 00:06:04.480 --rc geninfo_unexecuted_blocks=1 00:06:04.480 00:06:04.480 ' 00:06:04.480 18:13:22 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:04.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.480 --rc genhtml_branch_coverage=1 00:06:04.480 --rc genhtml_function_coverage=1 00:06:04.480 --rc genhtml_legend=1 00:06:04.480 --rc geninfo_all_blocks=1 00:06:04.480 --rc geninfo_unexecuted_blocks=1 00:06:04.480 00:06:04.480 ' 00:06:04.480 18:13:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:04.480 18:13:22 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:04.480 18:13:22 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:04.480 18:13:22 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:04.480 18:13:22 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.480 18:13:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:04.480 ************************************ 00:06:04.480 START TEST nvmf_target_core 00:06:04.480 ************************************ 00:06:04.480 18:13:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:04.480 * Looking for test storage... 00:06:04.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:04.480 18:13:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:04.480 18:13:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:06:04.480 18:13:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:04.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.740 --rc genhtml_branch_coverage=1 00:06:04.740 --rc genhtml_function_coverage=1 00:06:04.740 --rc genhtml_legend=1 00:06:04.740 --rc geninfo_all_blocks=1 00:06:04.740 --rc geninfo_unexecuted_blocks=1 00:06:04.740 00:06:04.740 ' 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:04.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.740 --rc genhtml_branch_coverage=1 00:06:04.740 --rc genhtml_function_coverage=1 00:06:04.740 --rc genhtml_legend=1 00:06:04.740 --rc geninfo_all_blocks=1 00:06:04.740 --rc geninfo_unexecuted_blocks=1 00:06:04.740 00:06:04.740 ' 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:04.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.740 --rc genhtml_branch_coverage=1 00:06:04.740 --rc genhtml_function_coverage=1 00:06:04.740 --rc genhtml_legend=1 00:06:04.740 --rc geninfo_all_blocks=1 00:06:04.740 --rc geninfo_unexecuted_blocks=1 00:06:04.740 00:06:04.740 ' 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:04.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.740 --rc genhtml_branch_coverage=1 00:06:04.740 --rc genhtml_function_coverage=1 00:06:04.740 --rc genhtml_legend=1 00:06:04.740 --rc geninfo_all_blocks=1 00:06:04.740 --rc geninfo_unexecuted_blocks=1 00:06:04.740 00:06:04.740 ' 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:04.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:04.740 
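Note on the "[: : integer expression expected" message in the trace above: the xtrace line immediately before it shows the test expanded to '[' '' -eq 1 ']', i.e. an empty string used as a numeric operand, which the shell's test builtin rejects. A minimal reproduction, illustrative only and not part of this run (the variable name is hypothetical):

  empty=''
  [ "$empty" -eq 1 ]         # emits an "integer expression expected" error and returns a non-zero status
  [ "${empty:-0}" -eq 1 ]    # supplying a numeric default lets the comparison evaluate cleanly (to false here)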
************************************ 00:06:04.740 START TEST nvmf_abort 00:06:04.740 ************************************ 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:04.740 * Looking for test storage... 00:06:04.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:06:04.740 18:13:22 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:04.740 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:04.740 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.740 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.740 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:04.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.741 --rc genhtml_branch_coverage=1 00:06:04.741 --rc genhtml_function_coverage=1 00:06:04.741 --rc genhtml_legend=1 00:06:04.741 --rc geninfo_all_blocks=1 00:06:04.741 --rc geninfo_unexecuted_blocks=1 00:06:04.741 00:06:04.741 ' 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:04.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.741 --rc genhtml_branch_coverage=1 00:06:04.741 --rc genhtml_function_coverage=1 00:06:04.741 --rc genhtml_legend=1 00:06:04.741 --rc geninfo_all_blocks=1 00:06:04.741 --rc geninfo_unexecuted_blocks=1 00:06:04.741 00:06:04.741 ' 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:04.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.741 --rc genhtml_branch_coverage=1 00:06:04.741 --rc genhtml_function_coverage=1 00:06:04.741 --rc genhtml_legend=1 00:06:04.741 --rc geninfo_all_blocks=1 00:06:04.741 --rc geninfo_unexecuted_blocks=1 00:06:04.741 00:06:04.741 ' 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:04.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.741 --rc genhtml_branch_coverage=1 00:06:04.741 --rc genhtml_function_coverage=1 00:06:04.741 --rc genhtml_legend=1 00:06:04.741 --rc geninfo_all_blocks=1 00:06:04.741 --rc geninfo_unexecuted_blocks=1 00:06:04.741 00:06:04.741 ' 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:04.741 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:05.000 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:05.000 18:13:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:07.531 18:13:25 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:07.531 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:07.531 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:07.531 18:13:25 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:07.531 Found net devices under 0000:09:00.0: cvl_0_0 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:07.531 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:07.532 Found net devices under 0000:09:00.1: cvl_0_1 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:07.532 18:13:25 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:07.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:07.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:06:07.532 00:06:07.532 --- 10.0.0.2 ping statistics --- 00:06:07.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:07.532 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:07.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:07.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:06:07.532 00:06:07.532 --- 10.0.0.1 ping statistics --- 00:06:07.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:07.532 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=1135659 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 1135659 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1135659 ']' 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:07.532 [2024-10-08 18:13:25.408539] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
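For reference, the interface plumbing traced above boils down to the following sequence. This is a hand-collected sketch of the ip/iptables commands from this run; the interface names cvl_0_0/cvl_0_1, the cvl_0_0_ns_spdk namespace and the 10.0.0.0/24 addressing are specific to this host, and the iptables comment option from the trace is omitted for brevity:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                                  # target side runs in its own network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move one e810 port into that namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator interface stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic to port 4420 through
  ping -c 1 10.0.0.2                                            # initiator -> target reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator reachability check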
00:06:07.532 [2024-10-08 18:13:25.408625] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:07.532 [2024-10-08 18:13:25.471232] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.532 [2024-10-08 18:13:25.573136] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:07.532 [2024-10-08 18:13:25.573197] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:07.532 [2024-10-08 18:13:25.573225] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:07.532 [2024-10-08 18:13:25.573238] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:07.532 [2024-10-08 18:13:25.573247] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:07.532 [2024-10-08 18:13:25.574141] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.532 [2024-10-08 18:13:25.574248] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:06:07.532 [2024-10-08 18:13:25.574258] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:07.532 [2024-10-08 18:13:25.722036] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:07.532 Malloc0 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:07.532 Delay0 
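The rpc_cmd calls traced here, together with the subsystem and listener calls that follow below, make up the target bring-up for abort.sh against the nvmf_tgt started in the cvl_0_0_ns_spdk namespace. Collected into one place, and assuming rpc_cmd maps onto scripts/rpc.py talking to the default /var/tmp/spdk.sock (the RPC variable and the rpc.py path here are illustrative, not taken from the trace), the sequence is roughly:

  RPC='ip netns exec cvl_0_0_ns_spdk ./scripts/rpc.py'
  $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256                 # TCP transport with the options captured above
  $RPC bdev_malloc_create 64 4096 -b Malloc0                          # 64 MiB malloc bdev, 4096-byte blocks
  $RPC bdev_delay_create -b Malloc0 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000                    # delay bdev layered on Malloc0 so submitted I/O stays queued and can be aborted
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # allow-any-host subsystem, serial SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0        # exposed as NSID 1 per the abort output below
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420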
00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:07.532 [2024-10-08 18:13:25.794620] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.532 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:07.533 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.533 18:13:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:07.791 [2024-10-08 18:13:25.859362] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:09.689 Initializing NVMe Controllers 00:06:09.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:09.689 controller IO queue size 128 less than required 00:06:09.689 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:09.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:09.689 Initialization complete. Launching workers. 
00:06:09.689 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28808 00:06:09.689 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28869, failed to submit 62 00:06:09.689 success 28812, unsuccessful 57, failed 0 00:06:09.689 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:09.689 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.689 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:09.689 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.689 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:09.689 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:09.689 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:09.689 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:09.689 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:09.689 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:09.689 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:09.689 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:09.689 rmmod nvme_tcp 00:06:09.689 rmmod nvme_fabrics 00:06:09.689 rmmod nvme_keyring 00:06:09.689 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:09.689 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:09.689 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:09.689 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 1135659 ']' 00:06:09.689 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 1135659 00:06:09.689 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1135659 ']' 00:06:09.689 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1135659 00:06:09.689 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:09.689 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.689 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1135659 00:06:09.947 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:09.947 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:09.947 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1135659' 00:06:09.947 killing process with pid 1135659 00:06:09.947 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1135659 00:06:09.947 18:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1135659 00:06:10.208 18:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:10.208 18:13:28 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:10.208 18:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:10.208 18:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:10.208 18:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:06:10.208 18:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:10.208 18:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:06:10.208 18:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:10.208 18:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:10.208 18:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:10.208 18:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:10.208 18:13:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:12.117 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:12.117 00:06:12.117 real 0m7.449s 00:06:12.117 user 0m10.474s 00:06:12.117 sys 0m2.532s 00:06:12.117 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.117 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:12.117 ************************************ 00:06:12.117 END TEST nvmf_abort 00:06:12.117 ************************************ 00:06:12.117 18:13:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:12.117 18:13:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:12.117 18:13:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.117 18:13:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:12.117 ************************************ 00:06:12.117 START TEST nvmf_ns_hotplug_stress 00:06:12.117 ************************************ 00:06:12.117 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:12.376 * Looking for test storage... 
00:06:12.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:12.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.376 --rc genhtml_branch_coverage=1 00:06:12.376 --rc genhtml_function_coverage=1 00:06:12.376 --rc genhtml_legend=1 00:06:12.376 --rc geninfo_all_blocks=1 00:06:12.376 --rc geninfo_unexecuted_blocks=1 00:06:12.376 00:06:12.376 ' 00:06:12.376 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:12.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.376 --rc genhtml_branch_coverage=1 00:06:12.376 --rc genhtml_function_coverage=1 00:06:12.376 --rc genhtml_legend=1 00:06:12.377 --rc geninfo_all_blocks=1 00:06:12.377 --rc geninfo_unexecuted_blocks=1 00:06:12.377 00:06:12.377 ' 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:12.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.377 --rc genhtml_branch_coverage=1 00:06:12.377 --rc genhtml_function_coverage=1 00:06:12.377 --rc genhtml_legend=1 00:06:12.377 --rc geninfo_all_blocks=1 00:06:12.377 --rc geninfo_unexecuted_blocks=1 00:06:12.377 00:06:12.377 ' 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:12.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.377 --rc genhtml_branch_coverage=1 00:06:12.377 --rc genhtml_function_coverage=1 00:06:12.377 --rc genhtml_legend=1 00:06:12.377 --rc geninfo_all_blocks=1 00:06:12.377 --rc geninfo_unexecuted_blocks=1 00:06:12.377 00:06:12.377 ' 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:12.377 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:12.377 18:13:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:14.913 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:14.913 
18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:14.913 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:14.913 Found net devices under 0000:09:00.0: cvl_0_0 00:06:14.913 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:14.914 Found net devices under 0000:09:00.1: cvl_0_1 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:14.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:14.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:06:14.914 00:06:14.914 --- 10.0.0.2 ping statistics --- 00:06:14.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:14.914 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:14.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:14.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:06:14.914 00:06:14.914 --- 10.0.0.1 ping statistics --- 00:06:14.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:14.914 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=1138017 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 1138017 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 
1138017 ']' 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.914 18:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:14.914 [2024-10-08 18:13:32.979424] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:06:14.914 [2024-10-08 18:13:32.979515] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:14.914 [2024-10-08 18:13:33.042224] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:14.914 [2024-10-08 18:13:33.150599] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:14.914 [2024-10-08 18:13:33.150663] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:14.914 [2024-10-08 18:13:33.150691] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:14.914 [2024-10-08 18:13:33.150702] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:14.914 [2024-10-08 18:13:33.150712] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
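(For orientation: the trace that follows sets up the ns_hotplug_stress target. Reassembled from the rpc.py invocations visible below, the sequence is roughly the following sketch; the paths, NQNs and numeric arguments are copied from the traced commands, the flag notes in the comments are interpretation, and the backgrounding of spdk_nvme_perf is presumed from the PERF_PID assignment rather than read from the script.)

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, -u 8192 = in-capsule data size
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 512 -b Malloc0                         # 32 MB malloc bdev, 512-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc bdev_null_create NULL1 1000 512                              # 1000 MB null bdev, 512-byte blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &                     # 30 s randread load (presumed backgrounded)
  PERF_PID=$!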
00:06:14.914 [2024-10-08 18:13:33.151588] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.914 [2024-10-08 18:13:33.151653] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.914 [2024-10-08 18:13:33.151656] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.173 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.173 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:15.173 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:15.173 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:15.173 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:15.173 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:15.173 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:15.173 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:15.430 [2024-10-08 18:13:33.542919] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:15.430 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:15.688 18:13:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:15.946 [2024-10-08 18:13:34.115252] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:15.946 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:16.203 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:16.461 Malloc0 00:06:16.461 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:16.719 Delay0 00:06:16.719 18:13:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.976 18:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:17.234 NULL1 00:06:17.234 18:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:17.492 18:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1138323 00:06:17.492 18:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:17.492 18:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1138323 00:06:17.492 18:13:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.750 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.008 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:18.008 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:18.265 true 00:06:18.265 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1138323 00:06:18.265 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.829 18:13:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.829 18:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:18.829 18:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:19.086 true 00:06:19.086 18:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1138323 00:06:19.086 18:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.651 18:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.651 18:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:19.651 18:13:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:19.911 true 00:06:19.911 18:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1138323 00:06:19.911 18:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.285 Read completed with error (sct=0, sc=11) 00:06:21.285 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.285 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:21.285 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:21.542 true 00:06:21.542 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1138323 00:06:21.542 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.800 18:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.058 18:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:22.058 18:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:22.316 true 00:06:22.316 18:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1138323 00:06:22.316 18:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.574 18:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.831 18:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:22.831 18:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:23.088 true 00:06:23.088 18:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1138323 00:06:23.088 18:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.021 18:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.311 18:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:24.311 18:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1007 00:06:24.594 true 00:06:24.594 18:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1138323 00:06:24.594 18:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.852 18:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.109 18:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:25.109 18:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:25.367 true 00:06:25.367 18:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1138323 00:06:25.367 18:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.625 18:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.883 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:25.883 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:26.140 true 00:06:26.140 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1138323 00:06:26.140 18:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.516 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.516 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.516 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.516 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:27.516 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:27.774 true 00:06:27.774 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1138323 00:06:27.774 18:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.032 18:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.289 18:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:28.289 18:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:28.548 true 00:06:28.548 18:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1138323 00:06:28.548 18:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.481 18:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.481 18:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:29.481 18:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:29.739 true 00:06:29.739 18:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1138323 00:06:29.739 18:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.302 18:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.302 18:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:30.302 18:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:30.559 true 00:06:30.559 18:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1138323 00:06:30.559 18:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.491 18:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.749 18:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:31.749 18:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:32.006 true 00:06:32.007 18:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1138323 
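(The repeating pattern running through this part of the trace is the hotplug loop itself, traced as target/ns_hotplug_stress.sh lines 44-50. Reassembled from those commands it is roughly the following sketch, an inference from the trace rather than the script verbatim.)

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  null_size=1000
  while kill -0 "$PERF_PID"; do                                     # keep going while spdk_nvme_perf is alive
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove namespace 1 under I/O
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-add it back
      null_size=$((null_size + 1))                                  # 1001, 1002, ... as seen in the trace
      $rpc bdev_null_resize NULL1 "$null_size"                      # resize NULL1 while the perf run continues
  done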
00:06:32.007 18:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.263 18:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.828 18:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:32.828 18:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:32.828 true 00:06:32.828 18:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1138323 00:06:32.828 18:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.086 18:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.344 18:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:33.344 18:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:33.601 true 00:06:33.601 18:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1138323 00:06:33.601 18:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.972 18:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.972 18:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:34.972 18:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:35.230 true 00:06:35.230 18:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1138323 00:06:35.230 18:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.487 18:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.745 18:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:35.745 18:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:36.003 true 00:06:36.003 18:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1138323 00:06:36.003 18:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.261 18:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.518 18:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:36.518 18:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:36.776 true 00:06:36.776 18:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1138323 00:06:36.776 18:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.708 18:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.966 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.223 18:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:38.223 18:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:38.480 true 00:06:38.480 18:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1138323 00:06:38.480 18:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.738 18:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.996 18:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:38.996 18:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:39.253 true 00:06:39.253 18:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1138323 00:06:39.253 18:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.510 18:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.767 18:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:39.767 18:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:40.024 true 00:06:40.024 18:13:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1138323 00:06:40.024 18:13:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.957 18:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.214 18:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:41.214 18:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:41.471 true 00:06:41.471 18:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1138323 00:06:41.471 18:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.729 18:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.987 18:14:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:41.987 18:14:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:42.245 true 00:06:42.245 18:14:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1138323 00:06:42.245 18:14:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.503 18:14:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.761 18:14:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:42.761 18:14:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:43.019 true 00:06:43.019 18:14:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1138323 00:06:43.019 18:14:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.952 18:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.952 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.952 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.209 18:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:44.209 18:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:44.467 true 00:06:44.467 18:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1138323 00:06:44.467 18:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.724 18:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.982 18:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:44.982 18:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:45.239 true 00:06:45.496 18:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1138323 00:06:45.496 18:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.060 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.060 18:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.317 18:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:46.317 18:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:46.575 true 00:06:46.575 18:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1138323 00:06:46.575 18:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.833 18:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.090 18:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:47.090 18:14:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:47.349 true 00:06:47.349 18:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1138323 00:06:47.349 18:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.282 Initializing NVMe Controllers 00:06:48.282 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:48.282 Controller IO queue size 128, less than required. 00:06:48.282 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:48.282 Controller IO queue size 128, less than required. 00:06:48.282 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:48.282 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:48.282 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:48.282 Initialization complete. Launching workers. 00:06:48.282 ======================================================== 00:06:48.282 Latency(us) 00:06:48.282 Device Information : IOPS MiB/s Average min max 00:06:48.282 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 402.60 0.20 128010.41 3470.95 1020212.02 00:06:48.282 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8167.73 3.99 15624.38 3393.14 480825.30 00:06:48.282 ======================================================== 00:06:48.282 Total : 8570.33 4.18 20903.82 3393.14 1020212.02 00:06:48.282 00:06:48.282 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.540 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:48.540 18:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:48.798 true 00:06:48.798 18:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1138323 00:06:48.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1138323) - No such process 00:06:48.798 18:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1138323 00:06:48.798 18:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.055 18:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:49.313 18:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:49.313 18:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:49.313 
18:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:49.313 18:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:49.313 18:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:49.571 null0 00:06:49.571 18:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:49.571 18:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:49.571 18:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:49.829 null1 00:06:49.829 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:49.829 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:49.829 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:50.086 null2 00:06:50.343 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:50.343 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:50.343 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:50.601 null3 00:06:50.601 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:50.601 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:50.601 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:50.859 null4 00:06:50.859 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:50.859 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:50.859 18:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:51.116 null5 00:06:51.116 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:51.116 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:51.116 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:51.374 null6 00:06:51.374 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:51.374 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < 
nthreads )) 00:06:51.374 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:51.632 null7 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:51.632 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:51.633 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:51.633 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:51.633 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:51.633 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:51.633 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:51.633 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.633 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:51.633 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:51.633 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:51.633 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:51.633 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:51.633 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:51.633 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:51.633 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.633 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:51.633 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:51.633 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:51.633 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:51.633 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:51.633 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:51.633 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:51.633 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.633 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:51.633 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
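The xtrace entries above and below record the parallel hot-plug phase of ns_hotplug_stress.sh: eight null bdevs (null0-null7) are created, one add_remove worker is started per bdev, the worker PIDs are collected, and the script then waits on all of them while each worker repeatedly attaches and detaches its namespace on nqn.2016-06.io.spdk:cnode1. A minimal sketch of that flow, reconstructed from the @14-@18 and @58-@66 trace lines only; the rpc_py/subsys shorthands and the exact loop form are assumptions, not the verbatim SPDK script.

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as it appears in the trace
subsys=nqn.2016-06.io.spdk:cnode1

# Each worker attaches and detaches one namespace ten times (@14-@18 in the trace).
add_remove() {
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"
        "$rpc_py" nvmf_subsystem_remove_ns "$subsys" "$nsid"
    done
}

nthreads=8
pids=()

# One null bdev per worker (bdev_null_create null$i 100 4096, @59-@60).
for ((i = 0; i < nthreads; i++)); do
    "$rpc_py" bdev_null_create "null$i" 100 4096
done

# Launch the workers in the background and remember their PIDs (@62-@64).
for ((i = 0; i < nthreads; i++)); do
    add_remove $((i + 1)) "null$i" &
    pids+=($!)
done

# Block until every worker has finished its add/remove iterations (@66).
wait "${pids[@]}"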
00:06:51.633 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:51.633 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:51.633 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:51.633 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:51.633 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:51.633 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1142521 1142522 1142524 1142526 1142528 1142530 1142532 1142534 00:06:51.633 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.633 18:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:51.891 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:51.891 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.891 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:51.891 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:51.891 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:51.891 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:51.891 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:51.891 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:52.149 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.149 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.149 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:52.149 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.149 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.149 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:52.149 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.149 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.149 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:52.149 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.149 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.149 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:52.149 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.149 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.149 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:52.149 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.149 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.149 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:52.149 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.149 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.149 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:52.149 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.149 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.149 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:52.407 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.407 18:14:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:52.407 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:52.407 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:52.407 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:52.407 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:52.407 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:52.407 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:52.665 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.665 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.665 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:52.665 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.665 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.665 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:52.665 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.665 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.665 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:52.665 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.665 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.665 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:52.665 18:14:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.665 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.665 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:52.665 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.665 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.665 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:52.923 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.923 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.923 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:52.923 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.923 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.923 18:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:53.181 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.181 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:53.181 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:53.181 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:53.181 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:53.181 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:53.181 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:53.181 18:14:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:53.470 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.470 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.470 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:53.470 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.470 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.470 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:53.470 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.470 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.470 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:53.470 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.470 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.470 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:53.470 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.470 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.470 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:53.470 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.470 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.470 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:53.470 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.470 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.470 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 
00:06:53.470 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.470 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.470 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:53.753 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.753 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:53.753 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:53.754 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:53.754 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:53.754 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:53.754 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:53.754 18:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:54.012 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.012 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.012 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:54.012 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.012 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.012 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:54.012 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.012 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.012 18:14:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:54.012 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.012 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.012 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:54.012 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.012 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.012 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.012 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:54.012 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.012 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:54.012 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.012 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.012 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:54.012 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.012 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.012 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:54.270 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.271 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:54.271 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:54.271 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:54.271 18:14:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:54.271 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:54.271 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:54.271 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:54.528 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.528 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.528 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:54.528 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.528 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.528 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:54.528 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.528 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.528 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:54.528 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.528 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.529 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:54.529 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.529 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.529 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:54.529 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.529 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.529 18:14:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:54.529 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.529 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.529 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:54.529 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.529 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.529 18:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:54.787 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.787 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:54.787 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:54.787 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:54.787 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:54.787 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:54.787 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:54.787 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:55.044 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.044 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.044 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:55.303 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.303 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.303 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:55.303 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.303 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.303 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:55.303 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.303 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.303 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:55.303 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.303 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.303 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:55.303 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.303 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.303 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.303 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.303 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:55.303 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:55.303 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.303 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.303 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:55.561 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.561 18:14:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:55.561 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:55.561 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:55.561 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:55.561 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:55.561 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:55.561 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:55.820 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.820 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.820 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:55.820 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.820 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.820 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:55.820 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.820 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.820 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:55.820 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.820 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.820 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:55.820 18:14:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.820 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.820 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:55.820 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.820 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.820 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:55.820 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.820 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.820 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:55.820 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.820 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.820 18:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:56.078 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.078 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:56.078 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:56.078 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:56.078 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:56.078 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:56.078 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:56.078 18:14:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:56.336 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.336 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.336 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:56.336 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.336 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.336 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:56.336 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.336 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.336 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:56.336 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.336 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.336 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:56.336 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.336 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.336 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:56.336 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.337 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.337 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:56.337 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.337 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.337 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
00:06:56.337 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.337 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.337 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:56.595 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.595 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:56.595 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:56.595 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:56.595 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:56.595 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:56.595 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:56.595 18:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:56.852 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.852 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.852 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:56.852 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.852 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.852 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:57.110 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.110 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.110 18:14:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:57.110 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.110 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.110 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:57.110 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.110 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.110 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:57.110 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.110 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.110 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.110 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.110 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:57.110 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:57.110 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.110 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.110 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:57.368 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.368 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:57.368 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:57.368 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:57.368 18:14:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:57.368 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:57.368 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:57.368 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:57.626 18:14:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:57.626 rmmod nvme_tcp 00:06:57.626 rmmod nvme_fabrics 00:06:57.626 rmmod nvme_keyring 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 1138017 ']' 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 1138017 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1138017 ']' 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1138017 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1138017 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1138017' 00:06:57.626 killing process with pid 1138017 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1138017 00:06:57.626 18:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1138017 00:06:57.884 18:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:57.884 18:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:57.884 18:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:57.884 18:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:57.884 18:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:06:57.884 18:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:57.884 18:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:06:57.884 18:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:57.884 18:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:57.884 18:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.884 18:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 
15> /dev/null' 00:06:57.884 18:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.424 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:00.424 00:07:00.424 real 0m47.788s 00:07:00.424 user 3m42.014s 00:07:00.424 sys 0m16.133s 00:07:00.424 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.424 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:00.424 ************************************ 00:07:00.424 END TEST nvmf_ns_hotplug_stress 00:07:00.424 ************************************ 00:07:00.424 18:14:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:00.424 18:14:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:00.424 18:14:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.424 18:14:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:00.424 ************************************ 00:07:00.424 START TEST nvmf_delete_subsystem 00:07:00.424 ************************************ 00:07:00.424 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:00.424 * Looking for test storage... 00:07:00.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:00.424 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:00.424 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.425 18:14:18 
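The nvmftestfini trace that closes the hot-plug test above unwinds the target in roughly the reverse order of setup: unload the kernel initiator modules, kill the nvmf_tgt application (pid 1138017 in this run), drop only the SPDK-tagged iptables rules, remove the target network namespace, and flush the test interface address. A condensed sketch of that sequence, based only on the commands visible in the trace; the namespace deletion line is an assumption about what _remove_spdk_ns boils down to.
# Condensed teardown, reconstructed from the nvmftestfini trace above.
modprobe -v -r nvme-tcp                        # retried up to 20 times in the real helper
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"             # nvmfpid was 1138017 in this run
iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything except SPDK-tagged rules
ip netns delete cvl_0_0_ns_spdk                # assumed effect of _remove_spdk_ns
ip -4 addr flush cvl_0_1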
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:00.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.425 --rc genhtml_branch_coverage=1 00:07:00.425 --rc genhtml_function_coverage=1 00:07:00.425 --rc genhtml_legend=1 00:07:00.425 --rc geninfo_all_blocks=1 00:07:00.425 --rc geninfo_unexecuted_blocks=1 00:07:00.425 00:07:00.425 ' 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:00.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.425 --rc genhtml_branch_coverage=1 00:07:00.425 --rc genhtml_function_coverage=1 00:07:00.425 --rc genhtml_legend=1 00:07:00.425 --rc geninfo_all_blocks=1 00:07:00.425 --rc geninfo_unexecuted_blocks=1 00:07:00.425 00:07:00.425 ' 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:00.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.425 --rc genhtml_branch_coverage=1 00:07:00.425 --rc genhtml_function_coverage=1 00:07:00.425 --rc genhtml_legend=1 00:07:00.425 --rc geninfo_all_blocks=1 00:07:00.425 --rc geninfo_unexecuted_blocks=1 00:07:00.425 00:07:00.425 ' 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:00.425 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.425 --rc genhtml_branch_coverage=1 00:07:00.425 --rc genhtml_function_coverage=1 00:07:00.425 --rc genhtml_legend=1 00:07:00.425 --rc geninfo_all_blocks=1 00:07:00.425 --rc geninfo_unexecuted_blocks=1 00:07:00.425 00:07:00.425 ' 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:00.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:00.425 18:14:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.357 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:02.357 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:02.357 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:02.357 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:02.358 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:02.358 
18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:02.358 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:02.358 Found net devices under 0000:09:00.0: cvl_0_0 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:02.358 Found net devices under 0000:09:00.1: cvl_0_1 
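The device discovery traced above (gather_supported_nvmf_pci_devs) matches known Intel E810/X722 and Mellanox device IDs against the PCI bus and then resolves each matched function to its kernel netdev through sysfs, which is how 0000:09:00.0 and 0000:09:00.1 end up mapped to cvl_0_0 and cvl_0_1. A rough sketch of that resolution step, assuming a pre-filtered pci_devs array as in the trace:
# For each matched PCI function, find the netdev(s) it exposes (sketch of the @408-@427 steps above).
net_devs=()
for pci in "${pci_devs[@]}"; do                 # e.g. 0000:09:00.0 and 0000:09:00.1 here
    pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )
    pci_net_devs=( "${pci_net_devs[@]##*/}" )   # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=( "${pci_net_devs[@]}" )
done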
00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:02.358 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:02.617 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:02.617 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:02.617 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:02.617 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:02.617 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:02.617 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:02.617 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:02.617 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:02.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:02.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:07:02.617 00:07:02.617 --- 10.0.0.2 ping statistics --- 00:07:02.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.617 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:07:02.617 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:02.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:02.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:07:02.617 00:07:02.617 --- 10.0.0.1 ping statistics --- 00:07:02.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.617 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:07:02.617 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:02.617 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:07:02.617 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:02.617 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:02.617 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:02.617 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:02.617 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:02.617 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:02.617 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:02.617 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:02.617 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:02.617 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:02.617 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.617 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=1145425 00:07:02.617 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:02.617 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 1145425 00:07:02.617 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1145425 ']' 00:07:02.617 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.617 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.617 18:14:20 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.617 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.617 18:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.617 [2024-10-08 18:14:20.809264] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:07:02.617 [2024-10-08 18:14:20.809361] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.617 [2024-10-08 18:14:20.873527] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:02.876 [2024-10-08 18:14:20.979232] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:02.876 [2024-10-08 18:14:20.979296] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:02.876 [2024-10-08 18:14:20.979327] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:02.876 [2024-10-08 18:14:20.979338] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:02.876 [2024-10-08 18:14:20.979347] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:02.876 [2024-10-08 18:14:20.980088] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.876 [2024-10-08 18:14:20.980094] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.876 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.876 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:02.876 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:02.876 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:02.876 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.876 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:02.876 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:02.876 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.876 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.876 [2024-10-08 18:14:21.128939] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:02.876 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.876 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:02.876 18:14:21 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.876 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.876 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.876 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:02.876 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.876 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.876 [2024-10-08 18:14:21.145171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:02.876 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.876 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:02.876 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.876 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.876 NULL1 00:07:02.876 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.876 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:02.876 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.876 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:03.134 Delay0 00:07:03.134 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.134 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.134 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.134 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:03.134 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.134 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1145453 00:07:03.134 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:03.134 18:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:03.134 [2024-10-08 18:14:21.219995] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
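Before the delete is exercised, the test stands up a complete target path inside the cvl_0_0_ns_spdk namespace: a TCP transport, a subsystem capped at 10 namespaces, a listener on 10.0.0.2:4420, and a 1000 MB null bdev wrapped in a delay bdev (Delay0) with all four latency knobs set to 1,000,000 microseconds, so in-flight I/O stays outstanding long enough to collide with the delete. The RPC sequence as traced above; rpc_cmd is the test framework's wrapper around scripts/rpc.py pointed at the namespaced target.
# Target-side setup traced above.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512                    # 1000 MB backing bdev, 512 B blocks
rpc_cmd bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000        # roughly 1 s of injected latency
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
The delay bdev is what makes the race interesting: with about a second of added latency and a 128-deep queue from spdk_nvme_perf, there is always I/O in flight when the subsystem is torn down.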
00:07:05.032 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:05.032 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.032 18:14:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 starting I/O failed: -6 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 starting I/O failed: -6 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 starting I/O failed: -6 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 starting I/O failed: -6 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 starting I/O failed: -6 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 starting I/O failed: -6 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 starting I/O failed: -6 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 starting I/O failed: -6 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 starting I/O failed: -6 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 starting I/O failed: -6 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 starting I/O failed: -6 00:07:05.290 [2024-10-08 18:14:23.472114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f99b400d320 is same with the state(6) to be set 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 
Read completed with error (sct=0, sc=8) 00:07:05.290 starting I/O failed: -6 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 starting I/O failed: -6 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 starting I/O failed: -6 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 starting I/O failed: -6 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 starting I/O failed: -6 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Write 
completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 starting I/O failed: -6 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 starting I/O failed: -6 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.290 Read completed with error (sct=0, sc=8) 00:07:05.290 Write completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 starting I/O failed: -6 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Write completed with error (sct=0, sc=8) 00:07:05.291 Write completed with error (sct=0, sc=8) 00:07:05.291 Write completed with error (sct=0, sc=8) 00:07:05.291 starting I/O failed: -6 00:07:05.291 Write completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 starting I/O failed: -6 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 starting I/O failed: -6 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 [2024-10-08 18:14:23.473061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2d390 is same with the state(6) to be set 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Write completed with error (sct=0, sc=8) 00:07:05.291 Write completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Write completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Write completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Write completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Write completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Write completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Write completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error 
(sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Write completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Write completed with error (sct=0, sc=8) 00:07:05.291 Write completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Write completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Write completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Read completed with error (sct=0, sc=8) 00:07:05.291 Write completed with error (sct=0, sc=8) 00:07:05.291 Write completed with error (sct=0, sc=8) 00:07:06.225 [2024-10-08 18:14:24.441137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2ea70 is same with the state(6) to be set 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Write completed with error (sct=0, sc=8) 00:07:06.225 Write completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Write completed with error (sct=0, sc=8) 00:07:06.225 Write completed with error (sct=0, sc=8) 00:07:06.225 Write completed with error (sct=0, sc=8) 00:07:06.225 [2024-10-08 18:14:24.473040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2d930 is same with the state(6) to be set 00:07:06.225 Write completed with error (sct=0, sc=8) 00:07:06.225 Write completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Write completed with error (sct=0, sc=8) 00:07:06.225 Write completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Write completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Write completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Write completed with error (sct=0, sc=8) 00:07:06.225 Write completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 
00:07:06.225 Write completed with error (sct=0, sc=8) 00:07:06.225 Write completed with error (sct=0, sc=8) 00:07:06.225 Write completed with error (sct=0, sc=8) 00:07:06.225 Write completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 [2024-10-08 18:14:24.473414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2d570 is same with the state(6) to be set 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Write completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Write completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Write completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Write completed with error (sct=0, sc=8) 00:07:06.225 Write completed with error (sct=0, sc=8) 00:07:06.225 Write completed with error (sct=0, sc=8) 00:07:06.225 Write completed with error (sct=0, sc=8) 00:07:06.225 Write completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 [2024-10-08 18:14:24.473574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f99b400cff0 is same with the state(6) to be set 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Write completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Write completed with error (sct=0, sc=8) 00:07:06.225 Write completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Read completed with error (sct=0, sc=8) 00:07:06.225 Write completed with error (sct=0, sc=8) 00:07:06.225 [2024-10-08 18:14:24.474448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f99b400d650 is same with the state(6) to be set 00:07:06.225 Initializing NVMe Controllers 00:07:06.225 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:06.225 Controller IO queue size 128, less than required. 00:07:06.225 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:06.225 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:06.225 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:06.225 Initialization complete. Launching workers. 
00:07:06.225 ======================================================== 00:07:06.225 Latency(us) 00:07:06.225 Device Information : IOPS MiB/s Average min max 00:07:06.225 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 166.20 0.08 904328.67 425.15 1013113.17 00:07:06.225 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 160.74 0.08 916431.48 692.64 1012322.71 00:07:06.225 ======================================================== 00:07:06.225 Total : 326.94 0.16 910279.07 425.15 1013113.17 00:07:06.225 00:07:06.225 [2024-10-08 18:14:24.474906] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b2ea70 (9): Bad file descriptor 00:07:06.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:06.225 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.225 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:06.225 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1145453 00:07:06.225 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:06.791 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:06.791 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1145453 00:07:06.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1145453) - No such process 00:07:06.791 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1145453 00:07:06.791 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:07:06.792 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1145453 00:07:06.792 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:07:06.792 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.792 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:07:06.792 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.792 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1145453 00:07:06.792 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:07:06.792 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:06.792 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:06.792 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:06.792 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:06.792 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.792 18:14:24 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.792 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.792 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:06.792 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.792 18:14:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.792 [2024-10-08 18:14:24.996318] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:06.792 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.792 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.792 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.792 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.792 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.792 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1145861 00:07:06.792 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:06.792 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1145861 00:07:06.792 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:06.792 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:06.792 [2024-10-08 18:14:25.054428] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
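For readability: the step-by-step xtrace above and below (background spdk_nvme_perf, then poll it while the subsystem is removed) boils down to roughly the following shell loop. This is an illustrative sketch only, not the verbatim target/delete_subsystem.sh; the variable names and the timeout bound are taken loosely from the trace records.

# Poll a backgrounded spdk_nvme_perf process; give up after ~10s (20 * 0.5s).
perf_pid=$!          # PID of the spdk_nvme_perf run started in the background
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && { echo "perf did not exit in time" >&2; exit 1; }
    sleep 0.5
done
wait "$perf_pid" || true   # perf is expected to exit non-zero once nvmf_delete_subsystem removes the subsystem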
00:07:07.357 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:07.357 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1145861 00:07:07.357 18:14:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:07.923 18:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:07.923 18:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1145861 00:07:07.923 18:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:08.488 18:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:08.488 18:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1145861 00:07:08.488 18:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:08.745 18:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:08.745 18:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1145861 00:07:08.745 18:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:09.310 18:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:09.310 18:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1145861 00:07:09.310 18:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:09.875 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:09.875 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1145861 00:07:09.875 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:09.875 Initializing NVMe Controllers 00:07:09.875 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:09.875 Controller IO queue size 128, less than required. 00:07:09.875 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:09.875 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:09.875 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:09.875 Initialization complete. Launching workers. 
00:07:09.875 ======================================================== 00:07:09.875 Latency(us) 00:07:09.875 Device Information : IOPS MiB/s Average min max 00:07:09.875 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003567.94 1000156.17 1011659.44 00:07:09.875 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005230.25 1000166.46 1011569.07 00:07:09.875 ======================================================== 00:07:09.875 Total : 256.00 0.12 1004399.09 1000156.17 1011659.44 00:07:09.875 00:07:10.440 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:10.440 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1145861 00:07:10.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1145861) - No such process 00:07:10.440 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1145861 00:07:10.440 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:10.440 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:10.440 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:10.440 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:10.440 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:10.440 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:10.440 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:10.440 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:10.440 rmmod nvme_tcp 00:07:10.440 rmmod nvme_fabrics 00:07:10.440 rmmod nvme_keyring 00:07:10.440 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:10.440 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:10.440 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:10.440 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 1145425 ']' 00:07:10.440 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 1145425 00:07:10.440 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1145425 ']' 00:07:10.440 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1145425 00:07:10.440 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:07:10.440 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.440 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1145425 00:07:10.440 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:10.440 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:07:10.440 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1145425' 00:07:10.440 killing process with pid 1145425 00:07:10.440 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1145425 00:07:10.440 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1145425 00:07:10.699 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:10.699 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:10.699 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:10.699 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:10.699 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:07:10.699 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:10.699 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:07:10.699 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:10.699 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:10.699 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.699 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:10.699 18:14:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.236 18:14:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:13.237 00:07:13.237 real 0m12.696s 00:07:13.237 user 0m28.160s 00:07:13.237 sys 0m3.107s 00:07:13.237 18:14:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.237 18:14:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:13.237 ************************************ 00:07:13.237 END TEST nvmf_delete_subsystem 00:07:13.237 ************************************ 00:07:13.237 18:14:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:13.237 18:14:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:13.237 18:14:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.237 18:14:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:13.237 ************************************ 00:07:13.237 START TEST nvmf_host_management 00:07:13.237 ************************************ 00:07:13.237 18:14:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:13.237 * Looking for test storage... 
00:07:13.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:13.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.237 --rc genhtml_branch_coverage=1 00:07:13.237 --rc genhtml_function_coverage=1 00:07:13.237 --rc genhtml_legend=1 00:07:13.237 --rc geninfo_all_blocks=1 00:07:13.237 --rc geninfo_unexecuted_blocks=1 00:07:13.237 00:07:13.237 ' 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:13.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.237 --rc genhtml_branch_coverage=1 00:07:13.237 --rc genhtml_function_coverage=1 00:07:13.237 --rc genhtml_legend=1 00:07:13.237 --rc geninfo_all_blocks=1 00:07:13.237 --rc geninfo_unexecuted_blocks=1 00:07:13.237 00:07:13.237 ' 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:13.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.237 --rc genhtml_branch_coverage=1 00:07:13.237 --rc genhtml_function_coverage=1 00:07:13.237 --rc genhtml_legend=1 00:07:13.237 --rc geninfo_all_blocks=1 00:07:13.237 --rc geninfo_unexecuted_blocks=1 00:07:13.237 00:07:13.237 ' 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:13.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.237 --rc genhtml_branch_coverage=1 00:07:13.237 --rc genhtml_function_coverage=1 00:07:13.237 --rc genhtml_legend=1 00:07:13.237 --rc geninfo_all_blocks=1 00:07:13.237 --rc geninfo_unexecuted_blocks=1 00:07:13.237 00:07:13.237 ' 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.237 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:13.238 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.238 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:13.238 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:13.238 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:13.238 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:13.238 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:13.238 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:13.238 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:13.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:13.238 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:13.238 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:13.238 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:13.238 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:13.238 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:13.238 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:13.238 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:13.238 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:13.238 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:13.238 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:13.238 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:13.238 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.238 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:13.238 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.238 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:13.238 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:13.238 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:13.238 18:14:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:15.139 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:15.139 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:15.139 Found net devices under 0000:09:00.0: cvl_0_0 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.139 18:14:33 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:15.139 Found net devices under 0000:09:00.1: cvl_0_1 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.139 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:15.140 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:07:15.140 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:15.140 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:15.140 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:15.140 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:15.140 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:15.140 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:15.140 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:15.140 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:15.140 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:15.140 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:15.140 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:15.140 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:15.140 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:15.140 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:15.140 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:15.140 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:15.140 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:15.140 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:15.140 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:15.140 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:15.140 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:15.140 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:15.140 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:15.140 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:15.140 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:15.140 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:15.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:15.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:07:15.140 00:07:15.140 --- 10.0.0.2 ping statistics --- 00:07:15.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.140 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:07:15.140 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:15.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:15.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:07:15.398 00:07:15.398 --- 10.0.0.1 ping statistics --- 00:07:15.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.398 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:07:15.398 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:15.398 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:07:15.398 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:15.398 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:15.398 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:15.398 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:15.398 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:15.398 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:15.398 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:15.398 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:15.398 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:15.398 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:15.398 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:15.398 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:15.398 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.398 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=1148339 00:07:15.398 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:15.398 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 1148339 00:07:15.398 18:14:33 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1148339 ']' 00:07:15.398 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.398 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:15.398 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.398 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.398 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.398 [2024-10-08 18:14:33.516859] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:07:15.398 [2024-10-08 18:14:33.516948] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.398 [2024-10-08 18:14:33.578442] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:15.398 [2024-10-08 18:14:33.678807] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:15.398 [2024-10-08 18:14:33.678865] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:15.398 [2024-10-08 18:14:33.678893] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:15.398 [2024-10-08 18:14:33.678905] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:15.398 [2024-10-08 18:14:33.678914] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
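A quick sanity check on the core mask used above: nvmfappstart launches nvmf_tgt with -m 0x1E, and 0x1E is binary 11110, i.e. cores 1 through 4, which is exactly where the reactor threads report starting below. A small stand-alone snippet (not part of the test) that decodes such a mask:

mask=0x1E                      # SPDK core mask as passed via -m
for core in $(seq 0 7); do
    (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
done
# prints cores 1, 2, 3 and 4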
00:07:15.398 [2024-10-08 18:14:33.680595] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.398 [2024-10-08 18:14:33.680776] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.398 [2024-10-08 18:14:33.680840] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:07:15.398 [2024-10-08 18:14:33.680843] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.657 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.657 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:15.657 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:15.657 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:15.657 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.657 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:15.657 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:15.657 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.657 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.657 [2024-10-08 18:14:33.841746] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:15.657 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.657 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:15.657 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:15.657 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.657 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:15.657 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:15.657 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:15.657 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.657 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.657 Malloc0 00:07:15.657 [2024-10-08 18:14:33.907137] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:15.657 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.657 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:15.657 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:15.657 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.915 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=1148381 00:07:15.915 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1148381 /var/tmp/bdevperf.sock 00:07:15.915 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1148381 ']' 00:07:15.915 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:15.915 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:15.915 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:15.915 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:15.915 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:07:15.915 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:15.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:15.915 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:07:15.915 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.915 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:15.915 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.915 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:15.915 { 00:07:15.915 "params": { 00:07:15.915 "name": "Nvme$subsystem", 00:07:15.915 "trtype": "$TEST_TRANSPORT", 00:07:15.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:15.915 "adrfam": "ipv4", 00:07:15.915 "trsvcid": "$NVMF_PORT", 00:07:15.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:15.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:15.915 "hdgst": ${hdgst:-false}, 00:07:15.915 "ddgst": ${ddgst:-false} 00:07:15.915 }, 00:07:15.915 "method": "bdev_nvme_attach_controller" 00:07:15.915 } 00:07:15.915 EOF 00:07:15.915 )") 00:07:15.915 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:07:15.915 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:07:15.915 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:07:15.915 18:14:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:15.915 "params": { 00:07:15.915 "name": "Nvme0", 00:07:15.915 "trtype": "tcp", 00:07:15.915 "traddr": "10.0.0.2", 00:07:15.915 "adrfam": "ipv4", 00:07:15.915 "trsvcid": "4420", 00:07:15.915 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:15.915 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:15.915 "hdgst": false, 00:07:15.915 "ddgst": false 00:07:15.915 }, 00:07:15.915 "method": "bdev_nvme_attach_controller" 00:07:15.915 }' 00:07:15.915 [2024-10-08 18:14:33.991089] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:07:15.915 [2024-10-08 18:14:33.991165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1148381 ] 00:07:15.915 [2024-10-08 18:14:34.052513] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.915 [2024-10-08 18:14:34.165560] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.174 Running I/O for 10 seconds... 00:07:16.174 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:16.174 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:16.174 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:16.174 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.174 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:16.174 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.174 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:16.174 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:16.174 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:16.174 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:16.174 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:16.174 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:16.174 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:16.174 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:16.174 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:16.174 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:16.174 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.174 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:16.174 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.174 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:16.174 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:16.174 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:16.431 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:16.431 
18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:16.431 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:16.431 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.431 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:16.431 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:16.431 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.689 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:07:16.689 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:07:16.689 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:16.689 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:16.689 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:16.689 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:16.689 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.689 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:16.689 [2024-10-08 18:14:34.746431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.689 [2024-10-08 18:14:34.746485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.689 [2024-10-08 18:14:34.746512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.689 [2024-10-08 18:14:34.746528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.689 [2024-10-08 18:14:34.746544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.689 [2024-10-08 18:14:34.746559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.689 [2024-10-08 18:14:34.746575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.689 [2024-10-08 18:14:34.746597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.689 [2024-10-08 18:14:34.746612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.689 [2024-10-08 18:14:34.746625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:07:16.689 [2024-10-08 18:14:34.746641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.689 [2024-10-08 18:14:34.746664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.689 [2024-10-08 18:14:34.746679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.689 [2024-10-08 18:14:34.746692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.689 [2024-10-08 18:14:34.746707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.689 [2024-10-08 18:14:34.746721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.689 [2024-10-08 18:14:34.746736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.689 [2024-10-08 18:14:34.746751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.689 [2024-10-08 18:14:34.746766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.689 [2024-10-08 18:14:34.746779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.689 [2024-10-08 18:14:34.746795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.689 [2024-10-08 18:14:34.746809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.689 [2024-10-08 18:14:34.746824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.689 [2024-10-08 18:14:34.746838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.689 [2024-10-08 18:14:34.746852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.689 [2024-10-08 18:14:34.746879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.689 [2024-10-08 18:14:34.746895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.689 [2024-10-08 18:14:34.746910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.689 [2024-10-08 18:14:34.746925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.689 [2024-10-08 18:14:34.746938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.689 
[2024-10-08 18:14:34.746953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.689 [2024-10-08 18:14:34.746967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.689 [2024-10-08 18:14:34.746982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.689 [2024-10-08 18:14:34.746996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.689 [2024-10-08 18:14:34.747011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.689 [2024-10-08 18:14:34.747024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.689 [2024-10-08 18:14:34.747039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.689 [2024-10-08 18:14:34.747053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.689 [2024-10-08 18:14:34.747067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.689 [2024-10-08 18:14:34.747081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.689 [2024-10-08 18:14:34.747095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.689 [2024-10-08 18:14:34.747109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.689 [2024-10-08 18:14:34.747124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.689 [2024-10-08 18:14:34.747153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.689 [2024-10-08 18:14:34.747168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.689 [2024-10-08 18:14:34.747182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.689 [2024-10-08 18:14:34.747197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.689 [2024-10-08 18:14:34.747210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.689 [2024-10-08 18:14:34.747225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.689 [2024-10-08 18:14:34.747238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.689 [2024-10-08 
18:14:34.747257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.689 [2024-10-08 18:14:34.747292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.689 [2024-10-08 18:14:34.747312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.747326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.747341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.747357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.747372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.747385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.747400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.747414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.747429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.747443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.747458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.747472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.747486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.747500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.747515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.747528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.747543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.747557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 
18:14:34.747571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.747602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.747618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.747631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.747645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.747670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.747686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.747701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.747715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.747728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.747743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.747755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.747770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.747783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.747797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.747810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.747825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.747837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.747852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.747866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 
18:14:34.747880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.747893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.747908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.747921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.747935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.747948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.747962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.747975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.747990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.748003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.748021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.748034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.748049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.748062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.748077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.748090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.748104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.748118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.748132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.748146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 
18:14:34.748160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.748174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.748188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.748201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.748216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.748229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.748243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.748256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.748293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.748309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.748325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.748339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.748356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.748370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.748385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.748402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.748418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.690 [2024-10-08 18:14:34.748432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:16.690 [2024-10-08 18:14:34.748516] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10c95a0 was disconnected and freed. reset controller. 
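The burst of "ABORTED - SQ DELETION" completions above is the expected fallout of the test revoking the host's access to the subsystem while bdevperf still has I/O in flight; the initiator then resets the controller and reconnects once access is restored. A sketch of that sequence using the same RPCs the harness invokes (NQNs as shown in the log; the surrounding timing is illustrative only):

```bash
#!/usr/bin/env bash
# Sketch only: revoke and restore host access while bdevperf I/O is in flight.
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
SUBSYS=nqn.2016-06.io.spdk:cnode0
HOST=nqn.2016-06.io.spdk:host0

# Revoking the host tears down its queues on the target; outstanding I/O on the
# initiator completes with "ABORTED - SQ DELETION", as in the dump above.
$RPC nvmf_subsystem_remove_host "$SUBSYS" "$HOST"

sleep 1

# Granting access again lets the initiator's automatic reset/reconnect succeed
# ("Resetting controller successful" below).
$RPC nvmf_subsystem_add_host "$SUBSYS" "$HOST"
```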
00:07:16.690 [2024-10-08 18:14:34.749696] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:16.690 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.690 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:16.690 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.690 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:16.690 task offset: 82816 on job bdev=Nvme0n1 fails 00:07:16.690 00:07:16.690 Latency(us) 00:07:16.690 [2024-10-08T16:14:34.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:16.691 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:16.691 Job: Nvme0n1 ended in about 0.40 seconds with error 00:07:16.691 Verification LBA range: start 0x0 length 0x400 00:07:16.691 Nvme0n1 : 0.40 1619.11 101.19 161.91 0.00 34878.41 2645.71 34564.17 00:07:16.691 [2024-10-08T16:14:34.978Z] =================================================================================================================== 00:07:16.691 [2024-10-08T16:14:34.978Z] Total : 1619.11 101.19 161.91 0.00 34878.41 2645.71 34564.17 00:07:16.691 [2024-10-08 18:14:34.751608] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:16.691 [2024-10-08 18:14:34.751637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb0b00 (9): Bad file descriptor 00:07:16.691 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.691 18:14:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:16.691 [2024-10-08 18:14:34.925441] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
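Both bdevperf runs receive their controller definition over an anonymous file descriptor: the JSON fragment printed above is generated on the fly and passed as --json /dev/fd/63 (and /dev/fd/62 for the second run). A minimal sketch of that pattern follows, assuming the fragment is wrapped into a standard bdev-subsystem config; the harness's actual gen_nvmf_target_json wrapper is not shown in this excerpt.

```bash
#!/usr/bin/env bash
# Sketch only: hand bdevperf an NVMe-oF attach config via process substitution.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

gen_config() {
    cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# Same shape as the first run above: queue depth 64, 64 KiB I/O, verify for 10 s.
"$SPDK_DIR/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
    --json <(gen_config) -q 64 -o 65536 -w verify -t 10
```

The waitforio helper then polls bdev_get_iostat on the bdevperf RPC socket and extracts .bdevs[0].num_read_ops with jq, as the earlier rpc_cmd calls show, until the read count crosses its threshold.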
00:07:17.624 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1148381 00:07:17.624 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1148381) - No such process 00:07:17.624 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:17.624 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:17.624 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:17.624 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:17.624 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:07:17.624 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:07:17.624 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:17.624 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:17.624 { 00:07:17.624 "params": { 00:07:17.624 "name": "Nvme$subsystem", 00:07:17.624 "trtype": "$TEST_TRANSPORT", 00:07:17.624 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:17.624 "adrfam": "ipv4", 00:07:17.624 "trsvcid": "$NVMF_PORT", 00:07:17.624 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:17.624 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:17.624 "hdgst": ${hdgst:-false}, 00:07:17.624 "ddgst": ${ddgst:-false} 00:07:17.624 }, 00:07:17.624 "method": "bdev_nvme_attach_controller" 00:07:17.624 } 00:07:17.624 EOF 00:07:17.624 )") 00:07:17.624 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:07:17.624 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:07:17.624 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:07:17.624 18:14:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:17.624 "params": { 00:07:17.624 "name": "Nvme0", 00:07:17.624 "trtype": "tcp", 00:07:17.624 "traddr": "10.0.0.2", 00:07:17.624 "adrfam": "ipv4", 00:07:17.624 "trsvcid": "4420", 00:07:17.624 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:17.624 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:17.624 "hdgst": false, 00:07:17.624 "ddgst": false 00:07:17.624 }, 00:07:17.624 "method": "bdev_nvme_attach_controller" 00:07:17.624 }' 00:07:17.624 [2024-10-08 18:14:35.809901] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:07:17.624 [2024-10-08 18:14:35.809976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1148663 ] 00:07:17.624 [2024-10-08 18:14:35.868878] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.882 [2024-10-08 18:14:35.983435] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.140 Running I/O for 1 seconds... 
00:07:19.073 1637.00 IOPS, 102.31 MiB/s 00:07:19.073 Latency(us) 00:07:19.073 [2024-10-08T16:14:37.360Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.073 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:19.073 Verification LBA range: start 0x0 length 0x400 00:07:19.073 Nvme0n1 : 1.07 1609.35 100.58 0.00 0.00 37692.46 10097.40 50875.35 00:07:19.073 [2024-10-08T16:14:37.360Z] =================================================================================================================== 00:07:19.073 [2024-10-08T16:14:37.360Z] Total : 1609.35 100.58 0.00 0.00 37692.46 10097.40 50875.35 00:07:19.330 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:19.330 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:19.330 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:19.330 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:19.587 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:19.587 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:19.587 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:19.587 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:19.587 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:19.587 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:19.587 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:19.587 rmmod nvme_tcp 00:07:19.587 rmmod nvme_fabrics 00:07:19.587 rmmod nvme_keyring 00:07:19.587 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:19.587 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:19.587 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:19.587 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 1148339 ']' 00:07:19.587 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 1148339 00:07:19.587 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1148339 ']' 00:07:19.587 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1148339 00:07:19.587 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:19.587 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.587 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1148339 00:07:19.587 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:19.587 18:14:37 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:19.587 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1148339' 00:07:19.587 killing process with pid 1148339 00:07:19.587 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1148339 00:07:19.587 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1148339 00:07:19.846 [2024-10-08 18:14:37.970826] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:19.846 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:19.846 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:19.846 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:19.846 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:19.846 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:07:19.846 18:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:19.846 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:07:19.846 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:19.846 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:19.846 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.846 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:19.846 18:14:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:22.380 00:07:22.380 real 0m9.079s 00:07:22.380 user 0m20.565s 00:07:22.380 sys 0m2.863s 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:22.380 ************************************ 00:07:22.380 END TEST nvmf_host_management 00:07:22.380 ************************************ 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:22.380 ************************************ 00:07:22.380 START TEST nvmf_lvol 00:07:22.380 ************************************ 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:22.380 * Looking for test storage... 00:07:22.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:22.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.380 --rc genhtml_branch_coverage=1 00:07:22.380 --rc genhtml_function_coverage=1 00:07:22.380 --rc genhtml_legend=1 00:07:22.380 --rc geninfo_all_blocks=1 00:07:22.380 --rc geninfo_unexecuted_blocks=1 00:07:22.380 00:07:22.380 ' 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:22.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.380 --rc genhtml_branch_coverage=1 00:07:22.380 --rc genhtml_function_coverage=1 00:07:22.380 --rc genhtml_legend=1 00:07:22.380 --rc geninfo_all_blocks=1 00:07:22.380 --rc geninfo_unexecuted_blocks=1 00:07:22.380 00:07:22.380 ' 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:22.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.380 --rc genhtml_branch_coverage=1 00:07:22.380 --rc genhtml_function_coverage=1 00:07:22.380 --rc genhtml_legend=1 00:07:22.380 --rc geninfo_all_blocks=1 00:07:22.380 --rc geninfo_unexecuted_blocks=1 00:07:22.380 00:07:22.380 ' 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:22.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.380 --rc genhtml_branch_coverage=1 00:07:22.380 --rc genhtml_function_coverage=1 00:07:22.380 --rc genhtml_legend=1 00:07:22.380 --rc geninfo_all_blocks=1 00:07:22.380 --rc geninfo_unexecuted_blocks=1 00:07:22.380 00:07:22.380 ' 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:22.380 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:22.381 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:22.381 18:14:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:24.285 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:24.285 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:24.285 18:14:42 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:24.285 Found net devices under 0000:09:00.0: cvl_0_0 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:24.285 Found net devices under 0000:09:00.1: cvl_0_1 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:24.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:24.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms 00:07:24.285 00:07:24.285 --- 10.0.0.2 ping statistics --- 00:07:24.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.285 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:07:24.285 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:24.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:24.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:07:24.285 00:07:24.285 --- 10.0.0.1 ping statistics --- 00:07:24.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:24.286 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:07:24.286 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:24.286 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:07:24.286 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:24.286 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:24.286 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:24.286 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:24.286 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:24.286 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:24.286 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:24.286 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:24.286 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:24.286 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:24.286 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:24.286 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=1150874 00:07:24.286 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 1150874 00:07:24.286 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1150874 ']' 00:07:24.286 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:24.286 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.286 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.286 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.286 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.286 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:24.544 [2024-10-08 18:14:42.576648] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
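Annotation (not captured output): the nvmf_tcp_init trace above isolates one port of the E810 NIC (cvl_0_0) in a dedicated network namespace so that target and initiator can exchange real TCP traffic on a single host. A minimal sketch of the equivalent commands, assuming root and using the interface names, addresses, and nvmf_tgt arguments reported for this run (workspace paths abbreviated):
# Sketch of the namespace plumbing performed by nvmf_tcp_init in the trace above.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one NIC port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
ping -c 1 10.0.0.2                                  # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator sanity check
# The target application is then launched inside the namespace:
ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7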
00:07:24.544 [2024-10-08 18:14:42.576732] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:24.544 [2024-10-08 18:14:42.647830] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:24.544 [2024-10-08 18:14:42.760159] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:24.544 [2024-10-08 18:14:42.760214] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:24.544 [2024-10-08 18:14:42.760242] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:24.544 [2024-10-08 18:14:42.760253] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:24.544 [2024-10-08 18:14:42.760263] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:24.544 [2024-10-08 18:14:42.761122] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.544 [2024-10-08 18:14:42.761181] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.544 [2024-10-08 18:14:42.761185] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.802 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.802 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:24.802 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:24.802 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:24.802 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:24.802 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:24.802 18:14:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:25.060 [2024-10-08 18:14:43.149618] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.060 18:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:25.318 18:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:25.318 18:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:25.576 18:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:25.576 18:14:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:25.835 18:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:26.093 18:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=47fda09d-d9e5-46e1-a2ea-9bf92e3a283b 00:07:26.093 18:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 47fda09d-d9e5-46e1-a2ea-9bf92e3a283b lvol 20 00:07:26.352 18:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=55cbb567-8b8d-47a2-8759-b2909f2abc3e 00:07:26.352 18:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:26.644 18:14:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 55cbb567-8b8d-47a2-8759-b2909f2abc3e 00:07:26.926 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:27.183 [2024-10-08 18:14:45.394774] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:27.184 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:27.441 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1151305 00:07:27.441 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:27.441 18:14:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:28.813 18:14:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 55cbb567-8b8d-47a2-8759-b2909f2abc3e MY_SNAPSHOT 00:07:28.813 18:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b6865959-6057-400a-8e9e-89acbcf84804 00:07:28.813 18:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 55cbb567-8b8d-47a2-8759-b2909f2abc3e 30 00:07:29.071 18:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone b6865959-6057-400a-8e9e-89acbcf84804 MY_CLONE 00:07:29.328 18:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c9939b1e-555c-42a8-9b4f-4220e1653ec8 00:07:29.328 18:14:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c9939b1e-555c-42a8-9b4f-4220e1653ec8 00:07:30.261 18:14:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1151305 00:07:38.369 Initializing NVMe Controllers 00:07:38.369 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:38.369 Controller IO queue size 128, less than required. 00:07:38.369 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:38.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:38.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:38.369 Initialization complete. Launching workers. 00:07:38.369 ======================================================== 00:07:38.369 Latency(us) 00:07:38.369 Device Information : IOPS MiB/s Average min max 00:07:38.369 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10420.20 40.70 12287.46 2093.45 71320.15 00:07:38.369 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10317.60 40.30 12412.92 2203.56 67852.21 00:07:38.369 ======================================================== 00:07:38.369 Total : 20737.81 81.01 12349.88 2093.45 71320.15 00:07:38.369 00:07:38.369 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:38.369 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 55cbb567-8b8d-47a2-8759-b2909f2abc3e 00:07:38.627 18:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 47fda09d-d9e5-46e1-a2ea-9bf92e3a283b 00:07:38.885 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:38.885 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:38.885 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:38.885 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:38.885 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:38.885 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:38.885 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:38.885 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:38.885 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:38.885 rmmod nvme_tcp 00:07:38.885 rmmod nvme_fabrics 00:07:38.885 rmmod nvme_keyring 00:07:38.885 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:38.885 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:38.885 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:38.885 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 1150874 ']' 00:07:38.885 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 1150874 00:07:38.885 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1150874 ']' 00:07:38.885 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1150874 00:07:38.885 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:07:38.885 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:38.885 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1150874 00:07:38.885 18:14:57 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:38.885 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:38.885 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1150874' 00:07:38.885 killing process with pid 1150874 00:07:38.885 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1150874 00:07:38.885 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1150874 00:07:39.451 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:39.451 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:39.451 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:39.451 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:39.451 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:07:39.451 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:39.451 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:07:39.451 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:39.451 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:39.451 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.451 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:39.451 18:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.356 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:41.356 00:07:41.356 real 0m19.394s 00:07:41.356 user 1m6.190s 00:07:41.356 sys 0m5.357s 00:07:41.356 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.356 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:41.356 ************************************ 00:07:41.356 END TEST nvmf_lvol 00:07:41.356 ************************************ 00:07:41.356 18:14:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:41.356 18:14:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:41.356 18:14:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.356 18:14:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:41.356 ************************************ 00:07:41.356 START TEST nvmf_lvs_grow 00:07:41.356 ************************************ 00:07:41.356 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:41.356 * Looking for test storage... 
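Annotation (not captured output): the nvmf_lvol test that finished above drives the whole scenario through scripts/rpc.py. A condensed sketch of that RPC sequence as it appears in the trace, with rpc.py standing in for the full script path and <...> placeholders for the UUIDs generated during this run:
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512                    # Malloc0
rpc.py bdev_malloc_create 64 512                    # Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
rpc.py bdev_lvol_create_lvstore raid0 lvs           # returns the lvstore UUID
rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20       # LVOL_BDEV_INIT_SIZE=20
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# while spdk_nvme_perf writes to the namespace over TCP:
rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT
rpc.py bdev_lvol_resize <lvol-uuid> 30              # LVOL_BDEV_FINAL_SIZE=30
rpc.py bdev_lvol_clone <snapshot-uuid> MY_CLONE
rpc.py bdev_lvol_inflate <clone-uuid>
# teardown
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
rpc.py bdev_lvol_delete <lvol-uuid>
rpc.py bdev_lvol_delete_lvstore -u <lvs-uuid>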
00:07:41.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:41.356 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:41.356 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:07:41.356 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:41.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.615 --rc genhtml_branch_coverage=1 00:07:41.615 --rc genhtml_function_coverage=1 00:07:41.615 --rc genhtml_legend=1 00:07:41.615 --rc geninfo_all_blocks=1 00:07:41.615 --rc geninfo_unexecuted_blocks=1 00:07:41.615 00:07:41.615 ' 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:41.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.615 --rc genhtml_branch_coverage=1 00:07:41.615 --rc genhtml_function_coverage=1 00:07:41.615 --rc genhtml_legend=1 00:07:41.615 --rc geninfo_all_blocks=1 00:07:41.615 --rc geninfo_unexecuted_blocks=1 00:07:41.615 00:07:41.615 ' 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:41.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.615 --rc genhtml_branch_coverage=1 00:07:41.615 --rc genhtml_function_coverage=1 00:07:41.615 --rc genhtml_legend=1 00:07:41.615 --rc geninfo_all_blocks=1 00:07:41.615 --rc geninfo_unexecuted_blocks=1 00:07:41.615 00:07:41.615 ' 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:41.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.615 --rc genhtml_branch_coverage=1 00:07:41.615 --rc genhtml_function_coverage=1 00:07:41.615 --rc genhtml_legend=1 00:07:41.615 --rc geninfo_all_blocks=1 00:07:41.615 --rc geninfo_unexecuted_blocks=1 00:07:41.615 00:07:41.615 ' 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:41.615 18:14:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.615 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.616 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.616 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:41.616 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.616 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:41.616 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:41.616 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:41.616 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:41.616 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:41.616 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:41.616 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:41.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:41.616 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:41.616 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:41.616 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:41.616 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:41.616 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:41.616 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:41.616 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:41.616 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:41.616 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:41.616 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:41.616 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:41.616 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.616 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:41.616 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.616 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:41.616 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:41.616 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:41.616 18:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:44.147 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:44.147 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:44.147 18:15:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:44.147 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:44.148 Found net devices under 0000:09:00.0: cvl_0_0 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:44.148 Found net devices under 0000:09:00.1: cvl_0_1 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:44.148 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:44.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:07:44.148 00:07:44.148 --- 10.0.0.2 ping statistics --- 00:07:44.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.148 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:44.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:44.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:07:44.148 00:07:44.148 --- 10.0.0.1 ping statistics --- 00:07:44.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.148 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=1154706 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 1154706 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1154706 ']' 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:44.148 18:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:44.148 [2024-10-08 18:15:02.039636] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:07:44.148 [2024-10-08 18:15:02.039723] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.148 [2024-10-08 18:15:02.104144] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.148 [2024-10-08 18:15:02.208878] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:44.148 [2024-10-08 18:15:02.208935] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.148 [2024-10-08 18:15:02.208962] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:44.148 [2024-10-08 18:15:02.208982] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:44.148 [2024-10-08 18:15:02.208992] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:44.148 [2024-10-08 18:15:02.209610] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.148 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:44.148 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:44.148 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:44.148 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:44.148 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:44.148 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.148 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:44.443 [2024-10-08 18:15:02.602391] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.443 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:44.443 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:44.443 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.443 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:44.443 ************************************ 00:07:44.443 START TEST lvs_grow_clean 00:07:44.443 ************************************ 00:07:44.443 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:44.443 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:44.443 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:44.443 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:44.443 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:44.443 18:15:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:44.443 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:44.443 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:44.443 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:44.443 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:44.702 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:44.702 18:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:44.960 18:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=2607fef0-2105-4d39-9a4a-322133382961 00:07:44.960 18:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2607fef0-2105-4d39-9a4a-322133382961 00:07:44.960 18:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:45.534 18:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:45.534 18:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:45.534 18:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2607fef0-2105-4d39-9a4a-322133382961 lvol 150 00:07:45.535 18:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0efef6d3-15ef-422f-9c7a-e4f613869a9e 00:07:45.535 18:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:45.535 18:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:45.794 [2024-10-08 18:15:04.059824] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:45.794 [2024-10-08 18:15:04.059919] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:45.794 true 00:07:45.794 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
2607fef0-2105-4d39-9a4a-322133382961 00:07:45.794 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:46.360 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:46.360 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:46.360 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0efef6d3-15ef-422f-9c7a-e4f613869a9e 00:07:46.925 18:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:46.925 [2024-10-08 18:15:05.159134] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:46.925 18:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:47.183 18:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1155144 00:07:47.183 18:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:47.183 18:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:47.183 18:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1155144 /var/tmp/bdevperf.sock 00:07:47.183 18:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1155144 ']' 00:07:47.183 18:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:47.183 18:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:47.183 18:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:47.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:47.183 18:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:47.183 18:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:47.444 [2024-10-08 18:15:05.497861] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:07:47.444 [2024-10-08 18:15:05.497941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1155144 ] 00:07:47.444 [2024-10-08 18:15:05.556082] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.444 [2024-10-08 18:15:05.666133] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.704 18:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:47.704 18:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:07:47.704 18:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:47.962 Nvme0n1 00:07:47.962 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:48.220 [ 00:07:48.220 { 00:07:48.220 "name": "Nvme0n1", 00:07:48.220 "aliases": [ 00:07:48.220 "0efef6d3-15ef-422f-9c7a-e4f613869a9e" 00:07:48.220 ], 00:07:48.220 "product_name": "NVMe disk", 00:07:48.220 "block_size": 4096, 00:07:48.220 "num_blocks": 38912, 00:07:48.220 "uuid": "0efef6d3-15ef-422f-9c7a-e4f613869a9e", 00:07:48.220 "numa_id": 0, 00:07:48.220 "assigned_rate_limits": { 00:07:48.220 "rw_ios_per_sec": 0, 00:07:48.220 "rw_mbytes_per_sec": 0, 00:07:48.220 "r_mbytes_per_sec": 0, 00:07:48.220 "w_mbytes_per_sec": 0 00:07:48.220 }, 00:07:48.220 "claimed": false, 00:07:48.220 "zoned": false, 00:07:48.220 "supported_io_types": { 00:07:48.220 "read": true, 00:07:48.220 "write": true, 00:07:48.220 "unmap": true, 00:07:48.220 "flush": true, 00:07:48.220 "reset": true, 00:07:48.220 "nvme_admin": true, 00:07:48.220 "nvme_io": true, 00:07:48.220 "nvme_io_md": false, 00:07:48.220 "write_zeroes": true, 00:07:48.220 "zcopy": false, 00:07:48.220 "get_zone_info": false, 00:07:48.220 "zone_management": false, 00:07:48.220 "zone_append": false, 00:07:48.220 "compare": true, 00:07:48.220 "compare_and_write": true, 00:07:48.220 "abort": true, 00:07:48.220 "seek_hole": false, 00:07:48.220 "seek_data": false, 00:07:48.220 "copy": true, 00:07:48.220 "nvme_iov_md": false 00:07:48.220 }, 00:07:48.220 "memory_domains": [ 00:07:48.220 { 00:07:48.220 "dma_device_id": "system", 00:07:48.220 "dma_device_type": 1 00:07:48.220 } 00:07:48.220 ], 00:07:48.220 "driver_specific": { 00:07:48.220 "nvme": [ 00:07:48.220 { 00:07:48.220 "trid": { 00:07:48.220 "trtype": "TCP", 00:07:48.220 "adrfam": "IPv4", 00:07:48.220 "traddr": "10.0.0.2", 00:07:48.220 "trsvcid": "4420", 00:07:48.220 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:48.220 }, 00:07:48.220 "ctrlr_data": { 00:07:48.220 "cntlid": 1, 00:07:48.220 "vendor_id": "0x8086", 00:07:48.220 "model_number": "SPDK bdev Controller", 00:07:48.220 "serial_number": "SPDK0", 00:07:48.220 "firmware_revision": "25.01", 00:07:48.220 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:48.220 "oacs": { 00:07:48.220 "security": 0, 00:07:48.220 "format": 0, 00:07:48.220 "firmware": 0, 00:07:48.220 "ns_manage": 0 00:07:48.220 }, 00:07:48.220 "multi_ctrlr": true, 00:07:48.220 
"ana_reporting": false 00:07:48.220 }, 00:07:48.220 "vs": { 00:07:48.220 "nvme_version": "1.3" 00:07:48.220 }, 00:07:48.220 "ns_data": { 00:07:48.220 "id": 1, 00:07:48.220 "can_share": true 00:07:48.220 } 00:07:48.220 } 00:07:48.220 ], 00:07:48.220 "mp_policy": "active_passive" 00:07:48.220 } 00:07:48.220 } 00:07:48.220 ] 00:07:48.220 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1155278 00:07:48.220 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:48.220 18:15:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:48.220 Running I/O for 10 seconds... 00:07:49.593 Latency(us) 00:07:49.593 [2024-10-08T16:15:07.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:49.593 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:49.593 Nvme0n1 : 1.00 14860.00 58.05 0.00 0.00 0.00 0.00 0.00 00:07:49.593 [2024-10-08T16:15:07.880Z] =================================================================================================================== 00:07:49.593 [2024-10-08T16:15:07.880Z] Total : 14860.00 58.05 0.00 0.00 0.00 0.00 0.00 00:07:49.593 00:07:50.158 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2607fef0-2105-4d39-9a4a-322133382961 00:07:50.416 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.416 Nvme0n1 : 2.00 15050.00 58.79 0.00 0.00 0.00 0.00 0.00 00:07:50.416 [2024-10-08T16:15:08.703Z] =================================================================================================================== 00:07:50.416 [2024-10-08T16:15:08.703Z] Total : 15050.00 58.79 0.00 0.00 0.00 0.00 0.00 00:07:50.416 00:07:50.416 true 00:07:50.416 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2607fef0-2105-4d39-9a4a-322133382961 00:07:50.416 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:50.981 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:50.981 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:50.981 18:15:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1155278 00:07:51.239 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.239 Nvme0n1 : 3.00 15155.67 59.20 0.00 0.00 0.00 0.00 0.00 00:07:51.239 [2024-10-08T16:15:09.526Z] =================================================================================================================== 00:07:51.239 [2024-10-08T16:15:09.526Z] Total : 15155.67 59.20 0.00 0.00 0.00 0.00 0.00 00:07:51.239 00:07:52.611 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.611 Nvme0n1 : 4.00 15176.75 59.28 0.00 0.00 0.00 0.00 0.00 00:07:52.611 [2024-10-08T16:15:10.898Z] 
=================================================================================================================== 00:07:52.611 [2024-10-08T16:15:10.898Z] Total : 15176.75 59.28 0.00 0.00 0.00 0.00 0.00 00:07:52.611 00:07:53.544 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.544 Nvme0n1 : 5.00 15278.40 59.68 0.00 0.00 0.00 0.00 0.00 00:07:53.544 [2024-10-08T16:15:11.831Z] =================================================================================================================== 00:07:53.544 [2024-10-08T16:15:11.831Z] Total : 15278.40 59.68 0.00 0.00 0.00 0.00 0.00 00:07:53.544 00:07:54.478 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.478 Nvme0n1 : 6.00 15367.17 60.03 0.00 0.00 0.00 0.00 0.00 00:07:54.478 [2024-10-08T16:15:12.765Z] =================================================================================================================== 00:07:54.478 [2024-10-08T16:15:12.765Z] Total : 15367.17 60.03 0.00 0.00 0.00 0.00 0.00 00:07:54.478 00:07:55.411 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.411 Nvme0n1 : 7.00 15439.71 60.31 0.00 0.00 0.00 0.00 0.00 00:07:55.411 [2024-10-08T16:15:13.698Z] =================================================================================================================== 00:07:55.411 [2024-10-08T16:15:13.698Z] Total : 15439.71 60.31 0.00 0.00 0.00 0.00 0.00 00:07:55.411 00:07:56.344 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.344 Nvme0n1 : 8.00 15494.12 60.52 0.00 0.00 0.00 0.00 0.00 00:07:56.344 [2024-10-08T16:15:14.631Z] =================================================================================================================== 00:07:56.344 [2024-10-08T16:15:14.631Z] Total : 15494.12 60.52 0.00 0.00 0.00 0.00 0.00 00:07:56.344 00:07:57.277 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.277 Nvme0n1 : 9.00 15529.89 60.66 0.00 0.00 0.00 0.00 0.00 00:07:57.277 [2024-10-08T16:15:15.564Z] =================================================================================================================== 00:07:57.277 [2024-10-08T16:15:15.564Z] Total : 15529.89 60.66 0.00 0.00 0.00 0.00 0.00 00:07:57.277 00:07:58.651 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.651 Nvme0n1 : 10.00 15564.40 60.80 0.00 0.00 0.00 0.00 0.00 00:07:58.651 [2024-10-08T16:15:16.938Z] =================================================================================================================== 00:07:58.651 [2024-10-08T16:15:16.938Z] Total : 15564.40 60.80 0.00 0.00 0.00 0.00 0.00 00:07:58.651 00:07:58.651 00:07:58.651 Latency(us) 00:07:58.651 [2024-10-08T16:15:16.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.651 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.651 Nvme0n1 : 10.01 15567.81 60.81 0.00 0.00 8217.49 5024.43 20777.34 00:07:58.651 [2024-10-08T16:15:16.938Z] =================================================================================================================== 00:07:58.651 [2024-10-08T16:15:16.938Z] Total : 15567.81 60.81 0.00 0.00 8217.49 5024.43 20777.34 00:07:58.651 { 00:07:58.651 "results": [ 00:07:58.651 { 00:07:58.651 "job": "Nvme0n1", 00:07:58.651 "core_mask": "0x2", 00:07:58.651 "workload": "randwrite", 00:07:58.651 "status": "finished", 00:07:58.651 "queue_depth": 128, 00:07:58.651 "io_size": 4096, 00:07:58.651 
"runtime": 10.006033, 00:07:58.651 "iops": 15567.807941468911, 00:07:58.651 "mibps": 60.81174977136293, 00:07:58.651 "io_failed": 0, 00:07:58.651 "io_timeout": 0, 00:07:58.651 "avg_latency_us": 8217.491115010447, 00:07:58.651 "min_latency_us": 5024.426666666666, 00:07:58.651 "max_latency_us": 20777.33925925926 00:07:58.651 } 00:07:58.651 ], 00:07:58.651 "core_count": 1 00:07:58.651 } 00:07:58.651 18:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1155144 00:07:58.651 18:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1155144 ']' 00:07:58.651 18:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1155144 00:07:58.651 18:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:07:58.651 18:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:58.651 18:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1155144 00:07:58.651 18:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:58.651 18:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:58.651 18:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1155144' 00:07:58.651 killing process with pid 1155144 00:07:58.651 18:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1155144 00:07:58.651 Received shutdown signal, test time was about 10.000000 seconds 00:07:58.651 00:07:58.651 Latency(us) 00:07:58.651 [2024-10-08T16:15:16.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.651 [2024-10-08T16:15:16.938Z] =================================================================================================================== 00:07:58.651 [2024-10-08T16:15:16.938Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:58.651 18:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1155144 00:07:58.651 18:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:58.909 18:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:59.167 18:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2607fef0-2105-4d39-9a4a-322133382961 00:07:59.167 18:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:59.425 18:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:59.425 18:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:59.425 18:15:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:59.683 [2024-10-08 18:15:17.937062] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:59.683 18:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2607fef0-2105-4d39-9a4a-322133382961 00:07:59.683 18:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:59.683 18:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2607fef0-2105-4d39-9a4a-322133382961 00:07:59.683 18:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:59.941 18:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.941 18:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:59.941 18:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.941 18:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:59.941 18:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.941 18:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:59.941 18:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:59.941 18:15:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2607fef0-2105-4d39-9a4a-322133382961 00:08:00.202 request: 00:08:00.202 { 00:08:00.202 "uuid": "2607fef0-2105-4d39-9a4a-322133382961", 00:08:00.202 "method": "bdev_lvol_get_lvstores", 00:08:00.202 "req_id": 1 00:08:00.202 } 00:08:00.202 Got JSON-RPC error response 00:08:00.202 response: 00:08:00.202 { 00:08:00.202 "code": -19, 00:08:00.202 "message": "No such device" 00:08:00.202 } 00:08:00.202 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:00.202 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:00.202 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:00.202 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:00.202 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:00.459 aio_bdev 00:08:00.459 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0efef6d3-15ef-422f-9c7a-e4f613869a9e 00:08:00.459 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=0efef6d3-15ef-422f-9c7a-e4f613869a9e 00:08:00.459 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:00.459 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:00.459 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:00.459 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:00.459 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:00.717 18:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0efef6d3-15ef-422f-9c7a-e4f613869a9e -t 2000 00:08:00.975 [ 00:08:00.975 { 00:08:00.975 "name": "0efef6d3-15ef-422f-9c7a-e4f613869a9e", 00:08:00.975 "aliases": [ 00:08:00.975 "lvs/lvol" 00:08:00.975 ], 00:08:00.975 "product_name": "Logical Volume", 00:08:00.975 "block_size": 4096, 00:08:00.975 "num_blocks": 38912, 00:08:00.975 "uuid": "0efef6d3-15ef-422f-9c7a-e4f613869a9e", 00:08:00.975 "assigned_rate_limits": { 00:08:00.975 "rw_ios_per_sec": 0, 00:08:00.975 "rw_mbytes_per_sec": 0, 00:08:00.975 "r_mbytes_per_sec": 0, 00:08:00.975 "w_mbytes_per_sec": 0 00:08:00.975 }, 00:08:00.975 "claimed": false, 00:08:00.975 "zoned": false, 00:08:00.975 "supported_io_types": { 00:08:00.975 "read": true, 00:08:00.975 "write": true, 00:08:00.975 "unmap": true, 00:08:00.975 "flush": false, 00:08:00.975 "reset": true, 00:08:00.975 "nvme_admin": false, 00:08:00.975 "nvme_io": false, 00:08:00.975 "nvme_io_md": false, 00:08:00.975 "write_zeroes": true, 00:08:00.975 "zcopy": false, 00:08:00.975 "get_zone_info": false, 00:08:00.975 "zone_management": false, 00:08:00.975 "zone_append": false, 00:08:00.975 "compare": false, 00:08:00.975 "compare_and_write": false, 00:08:00.975 "abort": false, 00:08:00.975 "seek_hole": true, 00:08:00.975 "seek_data": true, 00:08:00.975 "copy": false, 00:08:00.975 "nvme_iov_md": false 00:08:00.975 }, 00:08:00.975 "driver_specific": { 00:08:00.975 "lvol": { 00:08:00.975 "lvol_store_uuid": "2607fef0-2105-4d39-9a4a-322133382961", 00:08:00.975 "base_bdev": "aio_bdev", 00:08:00.975 "thin_provision": false, 00:08:00.975 "num_allocated_clusters": 38, 00:08:00.975 "snapshot": false, 00:08:00.975 "clone": false, 00:08:00.975 "esnap_clone": false 00:08:00.975 } 00:08:00.975 } 00:08:00.975 } 00:08:00.975 ] 00:08:00.975 18:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:00.975 18:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2607fef0-2105-4d39-9a4a-322133382961 00:08:00.975 
18:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:01.234 18:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:01.234 18:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2607fef0-2105-4d39-9a4a-322133382961 00:08:01.234 18:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:01.492 18:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:01.492 18:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0efef6d3-15ef-422f-9c7a-e4f613869a9e 00:08:01.750 18:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2607fef0-2105-4d39-9a4a-322133382961 00:08:02.008 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:02.266 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:02.266 00:08:02.266 real 0m17.776s 00:08:02.266 user 0m17.286s 00:08:02.266 sys 0m1.889s 00:08:02.266 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.266 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:02.266 ************************************ 00:08:02.266 END TEST lvs_grow_clean 00:08:02.266 ************************************ 00:08:02.266 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:02.266 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:02.266 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.266 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:02.266 ************************************ 00:08:02.266 START TEST lvs_grow_dirty 00:08:02.266 ************************************ 00:08:02.266 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:02.266 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:02.266 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:02.266 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:02.266 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:02.266 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:02.266 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:02.266 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:02.266 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:02.266 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:02.524 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:02.524 18:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:02.782 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=2ba41cfb-4896-4e45-837f-7a3281e3b58a 00:08:02.782 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ba41cfb-4896-4e45-837f-7a3281e3b58a 00:08:02.782 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:03.040 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:03.040 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:03.040 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2ba41cfb-4896-4e45-837f-7a3281e3b58a lvol 150 00:08:03.633 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=0dd9dc92-7780-446b-95de-bab046cfbffd 00:08:03.633 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:03.633 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:03.633 [2024-10-08 18:15:21.848683] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:03.633 [2024-10-08 18:15:21.848775] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:03.633 true 00:08:03.633 18:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ba41cfb-4896-4e45-837f-7a3281e3b58a 00:08:03.633 18:15:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:03.935 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:03.935 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:04.219 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0dd9dc92-7780-446b-95de-bab046cfbffd 00:08:04.477 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:04.735 [2024-10-08 18:15:22.931949] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:04.735 18:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:04.993 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1157860 00:08:04.993 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:04.993 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:04.993 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1157860 /var/tmp/bdevperf.sock 00:08:04.993 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1157860 ']' 00:08:04.993 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:04.993 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:04.993 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:04.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:04.993 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:04.993 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:04.993 [2024-10-08 18:15:23.253602] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:08:04.993 [2024-10-08 18:15:23.253690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1157860 ] 00:08:05.251 [2024-10-08 18:15:23.311154] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.251 [2024-10-08 18:15:23.417089] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.251 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:05.251 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:05.251 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:05.817 Nvme0n1 00:08:05.817 18:15:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:06.075 [ 00:08:06.075 { 00:08:06.075 "name": "Nvme0n1", 00:08:06.075 "aliases": [ 00:08:06.075 "0dd9dc92-7780-446b-95de-bab046cfbffd" 00:08:06.075 ], 00:08:06.075 "product_name": "NVMe disk", 00:08:06.075 "block_size": 4096, 00:08:06.075 "num_blocks": 38912, 00:08:06.075 "uuid": "0dd9dc92-7780-446b-95de-bab046cfbffd", 00:08:06.075 "numa_id": 0, 00:08:06.075 "assigned_rate_limits": { 00:08:06.075 "rw_ios_per_sec": 0, 00:08:06.075 "rw_mbytes_per_sec": 0, 00:08:06.075 "r_mbytes_per_sec": 0, 00:08:06.075 "w_mbytes_per_sec": 0 00:08:06.075 }, 00:08:06.075 "claimed": false, 00:08:06.075 "zoned": false, 00:08:06.075 "supported_io_types": { 00:08:06.075 "read": true, 00:08:06.075 "write": true, 00:08:06.075 "unmap": true, 00:08:06.075 "flush": true, 00:08:06.075 "reset": true, 00:08:06.075 "nvme_admin": true, 00:08:06.075 "nvme_io": true, 00:08:06.075 "nvme_io_md": false, 00:08:06.075 "write_zeroes": true, 00:08:06.075 "zcopy": false, 00:08:06.075 "get_zone_info": false, 00:08:06.075 "zone_management": false, 00:08:06.075 "zone_append": false, 00:08:06.075 "compare": true, 00:08:06.075 "compare_and_write": true, 00:08:06.075 "abort": true, 00:08:06.075 "seek_hole": false, 00:08:06.075 "seek_data": false, 00:08:06.075 "copy": true, 00:08:06.075 "nvme_iov_md": false 00:08:06.075 }, 00:08:06.075 "memory_domains": [ 00:08:06.075 { 00:08:06.075 "dma_device_id": "system", 00:08:06.075 "dma_device_type": 1 00:08:06.075 } 00:08:06.075 ], 00:08:06.075 "driver_specific": { 00:08:06.075 "nvme": [ 00:08:06.075 { 00:08:06.075 "trid": { 00:08:06.075 "trtype": "TCP", 00:08:06.075 "adrfam": "IPv4", 00:08:06.075 "traddr": "10.0.0.2", 00:08:06.075 "trsvcid": "4420", 00:08:06.075 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:06.075 }, 00:08:06.075 "ctrlr_data": { 00:08:06.075 "cntlid": 1, 00:08:06.075 "vendor_id": "0x8086", 00:08:06.075 "model_number": "SPDK bdev Controller", 00:08:06.075 "serial_number": "SPDK0", 00:08:06.075 "firmware_revision": "25.01", 00:08:06.075 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:06.075 "oacs": { 00:08:06.075 "security": 0, 00:08:06.075 "format": 0, 00:08:06.075 "firmware": 0, 00:08:06.075 "ns_manage": 0 00:08:06.075 }, 00:08:06.075 "multi_ctrlr": true, 00:08:06.075 
"ana_reporting": false 00:08:06.075 }, 00:08:06.075 "vs": { 00:08:06.075 "nvme_version": "1.3" 00:08:06.075 }, 00:08:06.075 "ns_data": { 00:08:06.075 "id": 1, 00:08:06.075 "can_share": true 00:08:06.075 } 00:08:06.075 } 00:08:06.075 ], 00:08:06.075 "mp_policy": "active_passive" 00:08:06.075 } 00:08:06.075 } 00:08:06.075 ] 00:08:06.075 18:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1157996 00:08:06.075 18:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:06.075 18:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:06.075 Running I/O for 10 seconds... 00:08:07.450 Latency(us) 00:08:07.450 [2024-10-08T16:15:25.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:07.450 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.450 Nvme0n1 : 1.00 14987.00 58.54 0.00 0.00 0.00 0.00 0.00 00:08:07.450 [2024-10-08T16:15:25.737Z] =================================================================================================================== 00:08:07.450 [2024-10-08T16:15:25.737Z] Total : 14987.00 58.54 0.00 0.00 0.00 0.00 0.00 00:08:07.450 00:08:08.015 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2ba41cfb-4896-4e45-837f-7a3281e3b58a 00:08:08.273 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.273 Nvme0n1 : 2.00 15240.50 59.53 0.00 0.00 0.00 0.00 0.00 00:08:08.273 [2024-10-08T16:15:26.560Z] =================================================================================================================== 00:08:08.273 [2024-10-08T16:15:26.560Z] Total : 15240.50 59.53 0.00 0.00 0.00 0.00 0.00 00:08:08.273 00:08:08.273 true 00:08:08.273 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ba41cfb-4896-4e45-837f-7a3281e3b58a 00:08:08.273 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:08.839 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:08.839 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:08.839 18:15:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1157996 00:08:09.097 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.097 Nvme0n1 : 3.00 15346.67 59.95 0.00 0.00 0.00 0.00 0.00 00:08:09.097 [2024-10-08T16:15:27.384Z] =================================================================================================================== 00:08:09.097 [2024-10-08T16:15:27.384Z] Total : 15346.67 59.95 0.00 0.00 0.00 0.00 0.00 00:08:09.097 00:08:10.470 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.470 Nvme0n1 : 4.00 15447.00 60.34 0.00 0.00 0.00 0.00 0.00 00:08:10.470 [2024-10-08T16:15:28.757Z] 
=================================================================================================================== 00:08:10.470 [2024-10-08T16:15:28.757Z] Total : 15447.00 60.34 0.00 0.00 0.00 0.00 0.00 00:08:10.470 00:08:11.405 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.405 Nvme0n1 : 5.00 15511.20 60.59 0.00 0.00 0.00 0.00 0.00 00:08:11.405 [2024-10-08T16:15:29.692Z] =================================================================================================================== 00:08:11.405 [2024-10-08T16:15:29.692Z] Total : 15511.20 60.59 0.00 0.00 0.00 0.00 0.00 00:08:11.405 00:08:12.339 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.339 Nvme0n1 : 6.00 15562.00 60.79 0.00 0.00 0.00 0.00 0.00 00:08:12.339 [2024-10-08T16:15:30.626Z] =================================================================================================================== 00:08:12.339 [2024-10-08T16:15:30.626Z] Total : 15562.00 60.79 0.00 0.00 0.00 0.00 0.00 00:08:12.339 00:08:13.272 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.272 Nvme0n1 : 7.00 15606.71 60.96 0.00 0.00 0.00 0.00 0.00 00:08:13.272 [2024-10-08T16:15:31.559Z] =================================================================================================================== 00:08:13.272 [2024-10-08T16:15:31.559Z] Total : 15606.71 60.96 0.00 0.00 0.00 0.00 0.00 00:08:13.272 00:08:14.207 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.207 Nvme0n1 : 8.00 15656.12 61.16 0.00 0.00 0.00 0.00 0.00 00:08:14.207 [2024-10-08T16:15:32.494Z] =================================================================================================================== 00:08:14.207 [2024-10-08T16:15:32.494Z] Total : 15656.12 61.16 0.00 0.00 0.00 0.00 0.00 00:08:14.207 00:08:15.141 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.141 Nvme0n1 : 9.00 15694.56 61.31 0.00 0.00 0.00 0.00 0.00 00:08:15.141 [2024-10-08T16:15:33.428Z] =================================================================================================================== 00:08:15.141 [2024-10-08T16:15:33.428Z] Total : 15694.56 61.31 0.00 0.00 0.00 0.00 0.00 00:08:15.141 00:08:16.515 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.515 Nvme0n1 : 10.00 15725.30 61.43 0.00 0.00 0.00 0.00 0.00 00:08:16.515 [2024-10-08T16:15:34.802Z] =================================================================================================================== 00:08:16.515 [2024-10-08T16:15:34.802Z] Total : 15725.30 61.43 0.00 0.00 0.00 0.00 0.00 00:08:16.515 00:08:16.515 00:08:16.515 Latency(us) 00:08:16.515 [2024-10-08T16:15:34.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.515 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.515 Nvme0n1 : 10.00 15730.92 61.45 0.00 0.00 8132.34 4296.25 17670.45 00:08:16.515 [2024-10-08T16:15:34.802Z] =================================================================================================================== 00:08:16.515 [2024-10-08T16:15:34.802Z] Total : 15730.92 61.45 0.00 0.00 8132.34 4296.25 17670.45 00:08:16.515 { 00:08:16.515 "results": [ 00:08:16.515 { 00:08:16.515 "job": "Nvme0n1", 00:08:16.515 "core_mask": "0x2", 00:08:16.515 "workload": "randwrite", 00:08:16.515 "status": "finished", 00:08:16.515 "queue_depth": 128, 00:08:16.515 "io_size": 4096, 00:08:16.515 
"runtime": 10.004567, 00:08:16.515 "iops": 15730.91569080401, 00:08:16.515 "mibps": 61.44888941720316, 00:08:16.515 "io_failed": 0, 00:08:16.515 "io_timeout": 0, 00:08:16.515 "avg_latency_us": 8132.343210858668, 00:08:16.515 "min_latency_us": 4296.248888888889, 00:08:16.515 "max_latency_us": 17670.447407407406 00:08:16.515 } 00:08:16.515 ], 00:08:16.515 "core_count": 1 00:08:16.515 } 00:08:16.515 18:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1157860 00:08:16.515 18:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1157860 ']' 00:08:16.515 18:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1157860 00:08:16.516 18:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:16.516 18:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:16.516 18:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1157860 00:08:16.516 18:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:16.516 18:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:16.516 18:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1157860' 00:08:16.516 killing process with pid 1157860 00:08:16.516 18:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1157860 00:08:16.516 Received shutdown signal, test time was about 10.000000 seconds 00:08:16.516 00:08:16.516 Latency(us) 00:08:16.516 [2024-10-08T16:15:34.803Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.516 [2024-10-08T16:15:34.803Z] =================================================================================================================== 00:08:16.516 [2024-10-08T16:15:34.803Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:16.516 18:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1157860 00:08:16.516 18:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:16.774 18:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:17.032 18:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ba41cfb-4896-4e45-837f-7a3281e3b58a 00:08:17.032 18:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:17.290 18:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:17.290 18:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:17.290 18:15:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1154706 00:08:17.290 18:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1154706 00:08:17.290 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1154706 Killed "${NVMF_APP[@]}" "$@" 00:08:17.290 18:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:17.290 18:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:17.290 18:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:17.290 18:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:17.290 18:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:17.290 18:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=1159334 00:08:17.290 18:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:17.290 18:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 1159334 00:08:17.290 18:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1159334 ']' 00:08:17.290 18:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.290 18:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:17.290 18:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.290 18:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:17.290 18:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:17.549 [2024-10-08 18:15:35.599722] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:08:17.549 [2024-10-08 18:15:35.599813] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.549 [2024-10-08 18:15:35.665079] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.549 [2024-10-08 18:15:35.770690] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:17.549 [2024-10-08 18:15:35.770755] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:17.549 [2024-10-08 18:15:35.770784] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:17.549 [2024-10-08 18:15:35.770795] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:17.549 [2024-10-08 18:15:35.770805] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:17.549 [2024-10-08 18:15:35.771449] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.807 18:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:17.807 18:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:17.807 18:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:17.807 18:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:17.807 18:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:17.807 18:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:17.807 18:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:18.065 [2024-10-08 18:15:36.158373] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:18.065 [2024-10-08 18:15:36.158516] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:18.065 [2024-10-08 18:15:36.158563] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:18.065 18:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:18.065 18:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 0dd9dc92-7780-446b-95de-bab046cfbffd 00:08:18.065 18:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=0dd9dc92-7780-446b-95de-bab046cfbffd 00:08:18.065 18:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:18.065 18:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:18.065 18:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:18.065 18:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:18.065 18:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:18.323 18:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0dd9dc92-7780-446b-95de-bab046cfbffd -t 2000 00:08:18.580 [ 00:08:18.580 { 00:08:18.580 "name": "0dd9dc92-7780-446b-95de-bab046cfbffd", 00:08:18.580 "aliases": [ 00:08:18.580 "lvs/lvol" 00:08:18.580 ], 00:08:18.580 "product_name": "Logical Volume", 00:08:18.580 "block_size": 4096, 00:08:18.580 "num_blocks": 38912, 00:08:18.580 "uuid": "0dd9dc92-7780-446b-95de-bab046cfbffd", 00:08:18.580 "assigned_rate_limits": { 00:08:18.580 "rw_ios_per_sec": 0, 00:08:18.580 "rw_mbytes_per_sec": 0, 
00:08:18.580 "r_mbytes_per_sec": 0, 00:08:18.580 "w_mbytes_per_sec": 0 00:08:18.580 }, 00:08:18.580 "claimed": false, 00:08:18.580 "zoned": false, 00:08:18.580 "supported_io_types": { 00:08:18.580 "read": true, 00:08:18.580 "write": true, 00:08:18.580 "unmap": true, 00:08:18.580 "flush": false, 00:08:18.580 "reset": true, 00:08:18.580 "nvme_admin": false, 00:08:18.580 "nvme_io": false, 00:08:18.580 "nvme_io_md": false, 00:08:18.580 "write_zeroes": true, 00:08:18.580 "zcopy": false, 00:08:18.580 "get_zone_info": false, 00:08:18.580 "zone_management": false, 00:08:18.580 "zone_append": false, 00:08:18.580 "compare": false, 00:08:18.580 "compare_and_write": false, 00:08:18.580 "abort": false, 00:08:18.580 "seek_hole": true, 00:08:18.580 "seek_data": true, 00:08:18.580 "copy": false, 00:08:18.580 "nvme_iov_md": false 00:08:18.580 }, 00:08:18.580 "driver_specific": { 00:08:18.580 "lvol": { 00:08:18.580 "lvol_store_uuid": "2ba41cfb-4896-4e45-837f-7a3281e3b58a", 00:08:18.580 "base_bdev": "aio_bdev", 00:08:18.580 "thin_provision": false, 00:08:18.580 "num_allocated_clusters": 38, 00:08:18.580 "snapshot": false, 00:08:18.580 "clone": false, 00:08:18.580 "esnap_clone": false 00:08:18.580 } 00:08:18.580 } 00:08:18.580 } 00:08:18.580 ] 00:08:18.580 18:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:18.580 18:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ba41cfb-4896-4e45-837f-7a3281e3b58a 00:08:18.580 18:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:18.838 18:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:18.838 18:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ba41cfb-4896-4e45-837f-7a3281e3b58a 00:08:18.838 18:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:19.096 18:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:19.096 18:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:19.354 [2024-10-08 18:15:37.528272] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:19.354 18:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ba41cfb-4896-4e45-837f-7a3281e3b58a 00:08:19.354 18:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:19.354 18:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ba41cfb-4896-4e45-837f-7a3281e3b58a 00:08:19.354 18:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:19.354 18:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.354 18:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:19.354 18:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.354 18:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:19.354 18:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:19.354 18:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:19.354 18:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:19.354 18:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ba41cfb-4896-4e45-837f-7a3281e3b58a 00:08:19.612 request: 00:08:19.612 { 00:08:19.612 "uuid": "2ba41cfb-4896-4e45-837f-7a3281e3b58a", 00:08:19.612 "method": "bdev_lvol_get_lvstores", 00:08:19.612 "req_id": 1 00:08:19.612 } 00:08:19.612 Got JSON-RPC error response 00:08:19.612 response: 00:08:19.612 { 00:08:19.612 "code": -19, 00:08:19.612 "message": "No such device" 00:08:19.612 } 00:08:19.612 18:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:19.612 18:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:19.612 18:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:19.612 18:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:19.612 18:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:19.870 aio_bdev 00:08:19.870 18:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0dd9dc92-7780-446b-95de-bab046cfbffd 00:08:19.870 18:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=0dd9dc92-7780-446b-95de-bab046cfbffd 00:08:19.870 18:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:19.870 18:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:19.870 18:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:19.870 18:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:19.870 18:15:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:20.128 18:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0dd9dc92-7780-446b-95de-bab046cfbffd -t 2000 00:08:20.386 [ 00:08:20.386 { 00:08:20.386 "name": "0dd9dc92-7780-446b-95de-bab046cfbffd", 00:08:20.386 "aliases": [ 00:08:20.386 "lvs/lvol" 00:08:20.386 ], 00:08:20.386 "product_name": "Logical Volume", 00:08:20.386 "block_size": 4096, 00:08:20.386 "num_blocks": 38912, 00:08:20.386 "uuid": "0dd9dc92-7780-446b-95de-bab046cfbffd", 00:08:20.386 "assigned_rate_limits": { 00:08:20.386 "rw_ios_per_sec": 0, 00:08:20.386 "rw_mbytes_per_sec": 0, 00:08:20.386 "r_mbytes_per_sec": 0, 00:08:20.386 "w_mbytes_per_sec": 0 00:08:20.386 }, 00:08:20.386 "claimed": false, 00:08:20.386 "zoned": false, 00:08:20.386 "supported_io_types": { 00:08:20.386 "read": true, 00:08:20.386 "write": true, 00:08:20.386 "unmap": true, 00:08:20.386 "flush": false, 00:08:20.386 "reset": true, 00:08:20.386 "nvme_admin": false, 00:08:20.386 "nvme_io": false, 00:08:20.386 "nvme_io_md": false, 00:08:20.386 "write_zeroes": true, 00:08:20.386 "zcopy": false, 00:08:20.386 "get_zone_info": false, 00:08:20.386 "zone_management": false, 00:08:20.386 "zone_append": false, 00:08:20.386 "compare": false, 00:08:20.386 "compare_and_write": false, 00:08:20.386 "abort": false, 00:08:20.386 "seek_hole": true, 00:08:20.386 "seek_data": true, 00:08:20.386 "copy": false, 00:08:20.386 "nvme_iov_md": false 00:08:20.386 }, 00:08:20.386 "driver_specific": { 00:08:20.386 "lvol": { 00:08:20.386 "lvol_store_uuid": "2ba41cfb-4896-4e45-837f-7a3281e3b58a", 00:08:20.386 "base_bdev": "aio_bdev", 00:08:20.386 "thin_provision": false, 00:08:20.386 "num_allocated_clusters": 38, 00:08:20.386 "snapshot": false, 00:08:20.386 "clone": false, 00:08:20.386 "esnap_clone": false 00:08:20.386 } 00:08:20.386 } 00:08:20.386 } 00:08:20.386 ] 00:08:20.386 18:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:20.386 18:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ba41cfb-4896-4e45-837f-7a3281e3b58a 00:08:20.386 18:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:20.644 18:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:20.644 18:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2ba41cfb-4896-4e45-837f-7a3281e3b58a 00:08:20.644 18:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:21.211 18:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:21.211 18:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0dd9dc92-7780-446b-95de-bab046cfbffd 00:08:21.211 18:15:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2ba41cfb-4896-4e45-837f-7a3281e3b58a 00:08:21.469 18:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:22.035 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:22.035 00:08:22.035 real 0m19.576s 00:08:22.035 user 0m49.432s 00:08:22.035 sys 0m4.664s 00:08:22.035 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.035 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:22.035 ************************************ 00:08:22.035 END TEST lvs_grow_dirty 00:08:22.035 ************************************ 00:08:22.035 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:22.035 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:22.035 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:22.035 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:22.035 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:22.035 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:22.036 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:22.036 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:22.036 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:22.036 nvmf_trace.0 00:08:22.036 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:22.036 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:22.036 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:22.036 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:22.036 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:22.036 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:22.036 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:22.036 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:22.036 rmmod nvme_tcp 00:08:22.036 rmmod nvme_fabrics 00:08:22.036 rmmod nvme_keyring 00:08:22.036 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:22.036 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:22.036 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:22.036 
18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 1159334 ']' 00:08:22.036 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 1159334 00:08:22.036 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1159334 ']' 00:08:22.036 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1159334 00:08:22.036 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:22.036 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:22.036 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1159334 00:08:22.036 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:22.036 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:22.036 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1159334' 00:08:22.036 killing process with pid 1159334 00:08:22.036 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1159334 00:08:22.036 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1159334 00:08:22.296 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:22.296 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:22.296 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:22.296 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:22.296 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:08:22.296 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:22.296 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:08:22.296 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:22.296 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:22.296 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.296 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:22.296 18:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:24.834 00:08:24.834 real 0m42.995s 00:08:24.834 user 1m12.895s 00:08:24.834 sys 0m8.548s 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:24.834 ************************************ 00:08:24.834 END TEST nvmf_lvs_grow 00:08:24.834 ************************************ 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:24.834 ************************************ 00:08:24.834 START TEST nvmf_bdev_io_wait 00:08:24.834 ************************************ 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:24.834 * Looking for test storage... 00:08:24.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:24.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.834 --rc genhtml_branch_coverage=1 00:08:24.834 --rc genhtml_function_coverage=1 00:08:24.834 --rc genhtml_legend=1 00:08:24.834 --rc geninfo_all_blocks=1 00:08:24.834 --rc geninfo_unexecuted_blocks=1 00:08:24.834 00:08:24.834 ' 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:24.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.834 --rc genhtml_branch_coverage=1 00:08:24.834 --rc genhtml_function_coverage=1 00:08:24.834 --rc genhtml_legend=1 00:08:24.834 --rc geninfo_all_blocks=1 00:08:24.834 --rc geninfo_unexecuted_blocks=1 00:08:24.834 00:08:24.834 ' 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:24.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.834 --rc genhtml_branch_coverage=1 00:08:24.834 --rc genhtml_function_coverage=1 00:08:24.834 --rc genhtml_legend=1 00:08:24.834 --rc geninfo_all_blocks=1 00:08:24.834 --rc geninfo_unexecuted_blocks=1 00:08:24.834 00:08:24.834 ' 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:24.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.834 --rc genhtml_branch_coverage=1 00:08:24.834 --rc genhtml_function_coverage=1 00:08:24.834 --rc genhtml_legend=1 00:08:24.834 --rc geninfo_all_blocks=1 00:08:24.834 --rc geninfo_unexecuted_blocks=1 00:08:24.834 00:08:24.834 ' 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:24.834 18:15:42 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:24.834 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.835 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.835 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.835 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:24.835 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.835 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:24.835 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:24.835 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:24.835 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:24.835 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:24.835 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:24.835 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:24.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:24.835 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:24.835 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:24.835 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:24.835 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:24.835 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:24.835 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:24.835 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:24.835 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:24.835 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:24.835 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:24.835 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:24.835 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.835 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:24.835 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.835 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:24.835 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:24.835 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:24.835 18:15:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.741 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:26.741 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:26.741 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:26.741 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:26.741 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:26.741 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:26.741 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:26.741 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:26.741 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:26.741 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:26.741 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:26.741 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:26.741 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:26.741 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:26.741 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:26.741 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:26.741 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:26.741 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:26.741 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:26.741 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:26.741 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:26.741 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:26.741 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:26.741 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:26.742 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:26.742 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.742 18:15:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:26.742 Found net devices under 0000:09:00.0: cvl_0_0 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:26.742 Found net devices under 0000:09:00.1: cvl_0_1 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:26.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:26.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms 00:08:26.742 00:08:26.742 --- 10.0.0.2 ping statistics --- 00:08:26.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.742 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:26.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:26.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:08:26.742 00:08:26.742 --- 10.0.0.1 ping statistics --- 00:08:26.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.742 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:26.742 18:15:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:26.742 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:26.742 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:26.742 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:26.742 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:26.742 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=1161870 00:08:26.742 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 1161870 00:08:26.742 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1161870 ']' 00:08:26.742 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.742 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:26.742 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:26.742 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.742 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:26.742 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:27.001 [2024-10-08 18:15:45.080776] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:08:27.001 [2024-10-08 18:15:45.080848] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.001 [2024-10-08 18:15:45.145945] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:27.001 [2024-10-08 18:15:45.251475] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.001 [2024-10-08 18:15:45.251533] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.001 [2024-10-08 18:15:45.251556] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:27.001 [2024-10-08 18:15:45.251581] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:27.001 [2024-10-08 18:15:45.251590] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:27.001 [2024-10-08 18:15:45.253236] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.001 [2024-10-08 18:15:45.253324] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:27.001 [2024-10-08 18:15:45.253359] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:08:27.001 [2024-10-08 18:15:45.253363] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:27.260 [2024-10-08 18:15:45.411170] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:27.260 Malloc0 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:27.260 [2024-10-08 18:15:45.473693] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1161912 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1161914 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:27.260 { 00:08:27.260 "params": { 
00:08:27.260 "name": "Nvme$subsystem", 00:08:27.260 "trtype": "$TEST_TRANSPORT", 00:08:27.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:27.260 "adrfam": "ipv4", 00:08:27.260 "trsvcid": "$NVMF_PORT", 00:08:27.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:27.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:27.260 "hdgst": ${hdgst:-false}, 00:08:27.260 "ddgst": ${ddgst:-false} 00:08:27.260 }, 00:08:27.260 "method": "bdev_nvme_attach_controller" 00:08:27.260 } 00:08:27.260 EOF 00:08:27.260 )") 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1161917 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:27.260 { 00:08:27.260 "params": { 00:08:27.260 "name": "Nvme$subsystem", 00:08:27.260 "trtype": "$TEST_TRANSPORT", 00:08:27.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:27.260 "adrfam": "ipv4", 00:08:27.260 "trsvcid": "$NVMF_PORT", 00:08:27.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:27.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:27.260 "hdgst": ${hdgst:-false}, 00:08:27.260 "ddgst": ${ddgst:-false} 00:08:27.260 }, 00:08:27.260 "method": "bdev_nvme_attach_controller" 00:08:27.260 } 00:08:27.260 EOF 00:08:27.260 )") 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1161920 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:27.260 { 00:08:27.260 "params": { 00:08:27.260 "name": "Nvme$subsystem", 00:08:27.260 "trtype": "$TEST_TRANSPORT", 00:08:27.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:27.260 "adrfam": "ipv4", 00:08:27.260 "trsvcid": "$NVMF_PORT", 00:08:27.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:27.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:27.260 "hdgst": ${hdgst:-false}, 
00:08:27.260 "ddgst": ${ddgst:-false} 00:08:27.260 }, 00:08:27.260 "method": "bdev_nvme_attach_controller" 00:08:27.260 } 00:08:27.260 EOF 00:08:27.260 )") 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:27.260 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:27.260 { 00:08:27.260 "params": { 00:08:27.260 "name": "Nvme$subsystem", 00:08:27.260 "trtype": "$TEST_TRANSPORT", 00:08:27.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:27.260 "adrfam": "ipv4", 00:08:27.260 "trsvcid": "$NVMF_PORT", 00:08:27.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:27.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:27.261 "hdgst": ${hdgst:-false}, 00:08:27.261 "ddgst": ${ddgst:-false} 00:08:27.261 }, 00:08:27.261 "method": "bdev_nvme_attach_controller" 00:08:27.261 } 00:08:27.261 EOF 00:08:27.261 )") 00:08:27.261 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:27.261 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1161912 00:08:27.261 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:27.261 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:27.261 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:27.261 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:27.261 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:27.261 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:27.261 "params": { 00:08:27.261 "name": "Nvme1", 00:08:27.261 "trtype": "tcp", 00:08:27.261 "traddr": "10.0.0.2", 00:08:27.261 "adrfam": "ipv4", 00:08:27.261 "trsvcid": "4420", 00:08:27.261 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:27.261 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:27.261 "hdgst": false, 00:08:27.261 "ddgst": false 00:08:27.261 }, 00:08:27.261 "method": "bdev_nvme_attach_controller" 00:08:27.261 }' 00:08:27.261 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:08:27.261 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:27.261 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:27.261 "params": { 00:08:27.261 "name": "Nvme1", 00:08:27.261 "trtype": "tcp", 00:08:27.261 "traddr": "10.0.0.2", 00:08:27.261 "adrfam": "ipv4", 00:08:27.261 "trsvcid": "4420", 00:08:27.261 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:27.261 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:27.261 "hdgst": false, 00:08:27.261 "ddgst": false 00:08:27.261 }, 00:08:27.261 "method": "bdev_nvme_attach_controller" 00:08:27.261 }' 00:08:27.261 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:27.261 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:27.261 "params": { 00:08:27.261 "name": "Nvme1", 00:08:27.261 "trtype": "tcp", 00:08:27.261 "traddr": "10.0.0.2", 00:08:27.261 "adrfam": "ipv4", 00:08:27.261 "trsvcid": "4420", 00:08:27.261 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:27.261 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:27.261 "hdgst": false, 00:08:27.261 "ddgst": false 00:08:27.261 }, 00:08:27.261 "method": "bdev_nvme_attach_controller" 00:08:27.261 }' 00:08:27.261 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:27.261 18:15:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:27.261 "params": { 00:08:27.261 "name": "Nvme1", 00:08:27.261 "trtype": "tcp", 00:08:27.261 "traddr": "10.0.0.2", 00:08:27.261 "adrfam": "ipv4", 00:08:27.261 "trsvcid": "4420", 00:08:27.261 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:27.261 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:27.261 "hdgst": false, 00:08:27.261 "ddgst": false 00:08:27.261 }, 00:08:27.261 "method": "bdev_nvme_attach_controller" 00:08:27.261 }' 00:08:27.261 [2024-10-08 18:15:45.524819] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:08:27.261 [2024-10-08 18:15:45.524819] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:08:27.261 [2024-10-08 18:15:45.524908] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-10-08 18:15:45.524908] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:27.261 --proc-type=auto ] 00:08:27.261 [2024-10-08 18:15:45.524929] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:08:27.261 [2024-10-08 18:15:45.524929] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
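The four bdevperf instances above are launched in parallel (write, read, flush, unmap), each reading a generated NVMe-oF attach config over a /dev/fd process substitution, which is why their DPDK initialization lines appear interleaved here and in the lines that follow. A minimal hand-written sketch of that launch pattern, assuming the SPDK test helpers (gen_nvmf_target_json) are sourced and the 10.0.0.2:4420 listener created earlier is up; the relative paths are illustrative, not the literal workspace paths from the trace:

# Sketch only: four concurrent bdevperf runs against the same NVMe-oF TCP target.
# Each instance gets its own shared-memory id (-i N), which also shows up as the
# DPDK --file-prefix spdkN in the interleaved EAL parameter lines of this log.
BDEVPERF=./build/examples/bdevperf   # assumed path relative to the SPDK repo root

$BDEVPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!
$BDEVPERF -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 &
READ_PID=$!
$BDEVPERF -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
FLUSH_PID=$!
$BDEVPERF -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
UNMAP_PID=$!

wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID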
00:08:27.261 [2024-10-08 18:15:45.525007] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-10-08 18:15:45.525007] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:27.261 --proc-type=auto ] 00:08:27.519 [2024-10-08 18:15:45.709217] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.777 [2024-10-08 18:15:45.813248] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:08:27.777 [2024-10-08 18:15:45.820225] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.777 [2024-10-08 18:15:45.897377] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.777 [2024-10-08 18:15:45.926132] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:08:27.777 [2024-10-08 18:15:45.977153] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.777 [2024-10-08 18:15:45.993357] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:08:28.045 [2024-10-08 18:15:46.069778] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:08:28.045 Running I/O for 1 seconds... 00:08:28.045 Running I/O for 1 seconds... 00:08:28.389 Running I/O for 1 seconds... 00:08:28.389 Running I/O for 1 seconds... 00:08:29.322 9689.00 IOPS, 37.85 MiB/s 00:08:29.322 Latency(us) 00:08:29.322 [2024-10-08T16:15:47.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.322 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:29.322 Nvme1n1 : 1.01 9734.70 38.03 0.00 0.00 13089.26 7136.14 18058.81 00:08:29.322 [2024-10-08T16:15:47.609Z] =================================================================================================================== 00:08:29.322 [2024-10-08T16:15:47.609Z] Total : 9734.70 38.03 0.00 0.00 13089.26 7136.14 18058.81 00:08:29.322 6988.00 IOPS, 27.30 MiB/s 00:08:29.322 Latency(us) 00:08:29.322 [2024-10-08T16:15:47.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.322 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:29.322 Nvme1n1 : 1.01 7051.18 27.54 0.00 0.00 18056.29 5024.43 26602.76 00:08:29.322 [2024-10-08T16:15:47.609Z] =================================================================================================================== 00:08:29.322 [2024-10-08T16:15:47.609Z] Total : 7051.18 27.54 0.00 0.00 18056.29 5024.43 26602.76 00:08:29.322 9499.00 IOPS, 37.11 MiB/s 00:08:29.322 Latency(us) 00:08:29.322 [2024-10-08T16:15:47.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.322 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:29.322 Nvme1n1 : 1.01 9573.39 37.40 0.00 0.00 13322.55 4636.07 22039.51 00:08:29.322 [2024-10-08T16:15:47.609Z] =================================================================================================================== 00:08:29.322 [2024-10-08T16:15:47.609Z] Total : 9573.39 37.40 0.00 0.00 13322.55 4636.07 22039.51 00:08:29.322 193880.00 IOPS, 757.34 MiB/s 00:08:29.322 Latency(us) 00:08:29.322 [2024-10-08T16:15:47.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:08:29.322 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:29.322 Nvme1n1 : 1.00 193507.37 755.89 0.00 0.00 657.91 318.58 1893.26 00:08:29.322 [2024-10-08T16:15:47.609Z] =================================================================================================================== 00:08:29.322 [2024-10-08T16:15:47.609Z] Total : 193507.37 755.89 0.00 0.00 657.91 318.58 1893.26 00:08:29.580 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1161914 00:08:29.580 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1161917 00:08:29.580 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1161920 00:08:29.580 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:29.580 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.580 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:29.580 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.580 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:29.580 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:29.580 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:29.580 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:29.580 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:29.580 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:29.580 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:29.580 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:29.580 rmmod nvme_tcp 00:08:29.580 rmmod nvme_fabrics 00:08:29.580 rmmod nvme_keyring 00:08:29.580 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:29.580 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:29.580 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:29.580 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 1161870 ']' 00:08:29.580 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 1161870 00:08:29.580 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1161870 ']' 00:08:29.580 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1161870 00:08:29.580 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:29.580 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:29.580 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1161870 00:08:29.838 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:29.838 18:15:47 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:29.838 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1161870' 00:08:29.838 killing process with pid 1161870 00:08:29.838 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1161870 00:08:29.838 18:15:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1161870 00:08:29.838 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:29.838 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:29.838 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:29.838 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:29.838 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:08:30.096 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:30.096 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:08:30.096 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:30.096 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:30.096 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.096 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.096 18:15:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.998 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:31.998 00:08:31.998 real 0m7.587s 00:08:31.998 user 0m17.416s 00:08:31.998 sys 0m3.961s 00:08:31.998 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.998 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.998 ************************************ 00:08:31.998 END TEST nvmf_bdev_io_wait 00:08:31.998 ************************************ 00:08:31.998 18:15:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:31.998 18:15:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:31.998 18:15:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.998 18:15:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:31.998 ************************************ 00:08:31.998 START TEST nvmf_queue_depth 00:08:31.998 ************************************ 00:08:31.998 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:31.998 * Looking for test storage... 
00:08:31.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:31.998 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:31.998 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:08:31.998 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:32.257 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:32.257 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:32.257 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:32.257 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:32.257 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:32.257 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:32.257 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:32.257 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:32.257 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:32.257 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:32.257 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:32.257 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:32.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.258 --rc genhtml_branch_coverage=1 00:08:32.258 --rc genhtml_function_coverage=1 00:08:32.258 --rc genhtml_legend=1 00:08:32.258 --rc geninfo_all_blocks=1 00:08:32.258 --rc geninfo_unexecuted_blocks=1 00:08:32.258 00:08:32.258 ' 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:32.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.258 --rc genhtml_branch_coverage=1 00:08:32.258 --rc genhtml_function_coverage=1 00:08:32.258 --rc genhtml_legend=1 00:08:32.258 --rc geninfo_all_blocks=1 00:08:32.258 --rc geninfo_unexecuted_blocks=1 00:08:32.258 00:08:32.258 ' 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:32.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.258 --rc genhtml_branch_coverage=1 00:08:32.258 --rc genhtml_function_coverage=1 00:08:32.258 --rc genhtml_legend=1 00:08:32.258 --rc geninfo_all_blocks=1 00:08:32.258 --rc geninfo_unexecuted_blocks=1 00:08:32.258 00:08:32.258 ' 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:32.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.258 --rc genhtml_branch_coverage=1 00:08:32.258 --rc genhtml_function_coverage=1 00:08:32.258 --rc genhtml_legend=1 00:08:32.258 --rc geninfo_all_blocks=1 00:08:32.258 --rc geninfo_unexecuted_blocks=1 00:08:32.258 00:08:32.258 ' 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:32.258 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:32.258 18:15:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:34.788 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:34.788 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:34.788 Found net devices under 0000:09:00.0: cvl_0_0 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:34.788 Found net devices under 0000:09:00.1: cvl_0_1 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:34.788 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:34.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:34.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:08:34.789 00:08:34.789 --- 10.0.0.2 ping statistics --- 00:08:34.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.789 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:34.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:34.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:08:34.789 00:08:34.789 --- 10.0.0.1 ping statistics --- 00:08:34.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.789 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=1164266 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 1164266 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1164266 ']' 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:34.789 18:15:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:34.789 [2024-10-08 18:15:52.809056] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:08:34.789 [2024-10-08 18:15:52.809162] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.789 [2024-10-08 18:15:52.876144] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.789 [2024-10-08 18:15:52.977696] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.789 [2024-10-08 18:15:52.977748] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:34.789 [2024-10-08 18:15:52.977771] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.789 [2024-10-08 18:15:52.977782] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.789 [2024-10-08 18:15:52.977791] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:34.789 [2024-10-08 18:15:52.978405] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.047 [2024-10-08 18:15:53.127135] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.047 Malloc0 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.047 18:15:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.047 [2024-10-08 18:15:53.188172] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1164293 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1164293 /var/tmp/bdevperf.sock 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1164293 ']' 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:35.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:35.047 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.047 [2024-10-08 18:15:53.235320] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
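For the queue-depth run, bdevperf is driven over its own RPC socket instead of a static --json config: it starts idle with -z, the NVMe-oF controller is attached through that socket, and bdevperf.py triggers the actual I/O. A minimal sketch of the sequence traced below, assuming the SPDK repo root as the working directory and the 10.0.0.2:4420 subsystem created just above:

# Sketch only: RPC-driven bdevperf as used by queue_depth.sh.
# 1. Start bdevperf idle (-z) on a private RPC socket with a 1024-deep verify workload.
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
BDEVPERF_PID=$!
# (the test script waits for the socket to come up with waitforlisten before continuing)

# 2. Attach the NVMe-oF TCP controller through that socket
#    (same arguments as the rpc_cmd call in the trace).
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# 3. Kick off the configured workload against the attached NVMe0n1 bdev and wait.
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
wait $BDEVPERF_PID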
00:08:35.047 [2024-10-08 18:15:53.235384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1164293 ] 00:08:35.047 [2024-10-08 18:15:53.292756] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.305 [2024-10-08 18:15:53.400889] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.305 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:35.305 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:35.305 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:35.305 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.305 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:35.563 NVMe0n1 00:08:35.563 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.563 18:15:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:35.563 Running I/O for 10 seconds... 00:08:37.867 8192.00 IOPS, 32.00 MiB/s [2024-10-08T16:15:57.087Z] 8200.50 IOPS, 32.03 MiB/s [2024-10-08T16:15:58.018Z] 8363.00 IOPS, 32.67 MiB/s [2024-10-08T16:15:58.950Z] 8439.00 IOPS, 32.96 MiB/s [2024-10-08T16:15:59.883Z] 8395.20 IOPS, 32.79 MiB/s [2024-10-08T16:16:01.254Z] 8439.50 IOPS, 32.97 MiB/s [2024-10-08T16:16:02.186Z] 8474.00 IOPS, 33.10 MiB/s [2024-10-08T16:16:03.120Z] 8444.00 IOPS, 32.98 MiB/s [2024-10-08T16:16:04.054Z] 8487.78 IOPS, 33.16 MiB/s [2024-10-08T16:16:04.054Z] 8489.30 IOPS, 33.16 MiB/s 00:08:45.767 Latency(us) 00:08:45.767 [2024-10-08T16:16:04.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.767 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:45.767 Verification LBA range: start 0x0 length 0x4000 00:08:45.767 NVMe0n1 : 10.08 8519.84 33.28 0.00 0.00 119714.64 21845.33 70293.43 00:08:45.767 [2024-10-08T16:16:04.054Z] =================================================================================================================== 00:08:45.767 [2024-10-08T16:16:04.054Z] Total : 8519.84 33.28 0.00 0.00 119714.64 21845.33 70293.43 00:08:45.767 { 00:08:45.767 "results": [ 00:08:45.767 { 00:08:45.767 "job": "NVMe0n1", 00:08:45.767 "core_mask": "0x1", 00:08:45.767 "workload": "verify", 00:08:45.767 "status": "finished", 00:08:45.767 "verify_range": { 00:08:45.767 "start": 0, 00:08:45.767 "length": 16384 00:08:45.767 }, 00:08:45.767 "queue_depth": 1024, 00:08:45.767 "io_size": 4096, 00:08:45.767 "runtime": 10.084343, 00:08:45.767 "iops": 8519.841104175057, 00:08:45.767 "mibps": 33.280629313183816, 00:08:45.767 "io_failed": 0, 00:08:45.767 "io_timeout": 0, 00:08:45.767 "avg_latency_us": 119714.63760640653, 00:08:45.767 "min_latency_us": 21845.333333333332, 00:08:45.767 "max_latency_us": 70293.42814814814 00:08:45.767 } 00:08:45.767 ], 00:08:45.767 "core_count": 1 00:08:45.767 } 00:08:45.767 18:16:03 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1164293 00:08:45.767 18:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1164293 ']' 00:08:45.767 18:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1164293 00:08:45.767 18:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:45.767 18:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:45.767 18:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1164293 00:08:45.767 18:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:45.767 18:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:45.767 18:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1164293' 00:08:45.767 killing process with pid 1164293 00:08:45.767 18:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1164293 00:08:45.767 Received shutdown signal, test time was about 10.000000 seconds 00:08:45.767 00:08:45.767 Latency(us) 00:08:45.767 [2024-10-08T16:16:04.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.767 [2024-10-08T16:16:04.054Z] =================================================================================================================== 00:08:45.767 [2024-10-08T16:16:04.054Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:45.767 18:16:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1164293 00:08:46.025 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:46.025 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:46.025 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:46.025 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:46.025 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:46.025 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:46.025 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:46.025 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:46.025 rmmod nvme_tcp 00:08:46.025 rmmod nvme_fabrics 00:08:46.025 rmmod nvme_keyring 00:08:46.025 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:46.025 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:46.025 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:46.025 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 1164266 ']' 00:08:46.025 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 1164266 00:08:46.025 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1164266 ']' 00:08:46.025 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@954 -- # kill -0 1164266 00:08:46.025 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:46.025 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:46.283 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1164266 00:08:46.283 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:46.283 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:46.283 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1164266' 00:08:46.283 killing process with pid 1164266 00:08:46.283 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1164266 00:08:46.283 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1164266 00:08:46.543 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:46.543 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:46.543 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:46.543 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:46.543 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:08:46.543 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:46.543 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:08:46.543 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:46.543 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:46.543 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.543 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:46.543 18:16:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.451 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:48.451 00:08:48.451 real 0m16.483s 00:08:48.451 user 0m22.960s 00:08:48.451 sys 0m3.280s 00:08:48.451 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:48.451 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:48.451 ************************************ 00:08:48.451 END TEST nvmf_queue_depth 00:08:48.451 ************************************ 00:08:48.451 18:16:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:48.451 18:16:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:48.451 18:16:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:48.451 18:16:06 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:08:48.710 ************************************ 00:08:48.710 START TEST nvmf_target_multipath 00:08:48.710 ************************************ 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:48.710 * Looking for test storage... 00:08:48.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:48.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.710 --rc genhtml_branch_coverage=1 00:08:48.710 --rc genhtml_function_coverage=1 00:08:48.710 --rc genhtml_legend=1 00:08:48.710 --rc geninfo_all_blocks=1 00:08:48.710 --rc geninfo_unexecuted_blocks=1 00:08:48.710 00:08:48.710 ' 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:48.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.710 --rc genhtml_branch_coverage=1 00:08:48.710 --rc genhtml_function_coverage=1 00:08:48.710 --rc genhtml_legend=1 00:08:48.710 --rc geninfo_all_blocks=1 00:08:48.710 --rc geninfo_unexecuted_blocks=1 00:08:48.710 00:08:48.710 ' 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:48.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.710 --rc genhtml_branch_coverage=1 00:08:48.710 --rc genhtml_function_coverage=1 00:08:48.710 --rc genhtml_legend=1 00:08:48.710 --rc geninfo_all_blocks=1 00:08:48.710 --rc geninfo_unexecuted_blocks=1 00:08:48.710 00:08:48.710 ' 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:48.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.710 --rc genhtml_branch_coverage=1 00:08:48.710 --rc genhtml_function_coverage=1 00:08:48.710 --rc genhtml_legend=1 00:08:48.710 --rc geninfo_all_blocks=1 00:08:48.710 --rc geninfo_unexecuted_blocks=1 00:08:48.710 00:08:48.710 ' 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:48.710 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.711 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.711 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:48.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:48.711 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:48.711 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:48.711 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:48.711 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:48.711 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:48.711 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:48.711 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:48.711 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:48.711 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:48.711 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:48.711 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:48.711 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:48.711 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:48.711 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.711 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.711 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.711 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:48.711 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:48.711 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:48.711 18:16:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:51.242 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:51.242 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:51.242 Found net devices under 0000:09:00.0: cvl_0_0 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:51.242 18:16:09 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:51.242 Found net devices under 0000:09:00.1: cvl_0_1 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:51.242 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:51.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:51.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:08:51.243 00:08:51.243 --- 10.0.0.2 ping statistics --- 00:08:51.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.243 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:51.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:51.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:08:51.243 00:08:51.243 --- 10.0.0.1 ping statistics --- 00:08:51.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.243 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:51.243 only one NIC for nvmf test 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
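The namespace plumbing that nvmf_tcp_init performs in the trace above boils down to the commands below (a sketch using the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses from this run; other hosts will report different NIC names):

# target side lives in its own network namespace, initiator side stays in the default one
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP traffic in, then sanity-check both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1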
00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:51.243 rmmod nvme_tcp 00:08:51.243 rmmod nvme_fabrics 00:08:51.243 rmmod nvme_keyring 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.243 18:16:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.150 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:53.150 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:53.150 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:53.150 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:53.150 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:53.150 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:53.150 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:53.150 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:53.150 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:53.150 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:53.150 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:53.150 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:53.150 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:08:53.150 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:53.150 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:53.150 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:53.150 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:53.150 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:08:53.150 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:53.150 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:08:53.150 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:53.150 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:53.150 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.150 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.150 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.150 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:53.150 00:08:53.150 real 0m4.668s 00:08:53.150 user 0m0.977s 00:08:53.150 sys 0m1.697s 00:08:53.150 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.150 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:53.150 ************************************ 00:08:53.150 END TEST nvmf_target_multipath 00:08:53.150 ************************************ 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:53.409 ************************************ 00:08:53.409 START TEST nvmf_zcopy 00:08:53.409 ************************************ 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:53.409 * Looking for test storage... 
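The teardown traced just before this point (nvmftestfini) reduces to roughly the following (a sketch; _remove_spdk_ns is not expanded in the trace, so the netns delete shown here is an assumption about what it does):

sync
# unload the NVMe/TCP initiator stack; the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above come from these
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# drop only the SPDK_NVMF-tagged iptables rules added during setup
iptables-save | grep -v SPDK_NVMF | iptables-restore
# assumption: _remove_spdk_ns deletes the namespace created by nvmf_tcp_init
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1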
00:08:53.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:53.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.409 --rc genhtml_branch_coverage=1 00:08:53.409 --rc genhtml_function_coverage=1 00:08:53.409 --rc genhtml_legend=1 00:08:53.409 --rc geninfo_all_blocks=1 00:08:53.409 --rc geninfo_unexecuted_blocks=1 00:08:53.409 00:08:53.409 ' 00:08:53.409 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:53.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.409 --rc genhtml_branch_coverage=1 00:08:53.409 --rc genhtml_function_coverage=1 00:08:53.409 --rc genhtml_legend=1 00:08:53.409 --rc geninfo_all_blocks=1 00:08:53.410 --rc geninfo_unexecuted_blocks=1 00:08:53.410 00:08:53.410 ' 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:53.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.410 --rc genhtml_branch_coverage=1 00:08:53.410 --rc genhtml_function_coverage=1 00:08:53.410 --rc genhtml_legend=1 00:08:53.410 --rc geninfo_all_blocks=1 00:08:53.410 --rc geninfo_unexecuted_blocks=1 00:08:53.410 00:08:53.410 ' 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:53.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.410 --rc genhtml_branch_coverage=1 00:08:53.410 --rc genhtml_function_coverage=1 00:08:53.410 --rc genhtml_legend=1 00:08:53.410 --rc geninfo_all_blocks=1 00:08:53.410 --rc geninfo_unexecuted_blocks=1 00:08:53.410 00:08:53.410 ' 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:53.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:53.410 18:16:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:55.943 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:55.943 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:55.943 Found net devices under 0000:09:00.0: cvl_0_0 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:55.943 Found net devices under 0000:09:00.1: cvl_0_1 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:55.943 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:55.944 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:55.944 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:55.944 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:55.944 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:55.944 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:55.944 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:55.944 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:55.944 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:55.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:55.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:08:55.944 00:08:55.944 --- 10.0.0.2 ping statistics --- 00:08:55.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.944 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:08:55.944 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:55.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:55.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:08:55.944 00:08:55.944 --- 10.0.0.1 ping statistics --- 00:08:55.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.944 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:08:55.944 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:55.944 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:08:55.944 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:55.944 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:55.944 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:55.944 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:55.944 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:55.944 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:55.944 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:55.944 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:55.944 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:55.944 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:55.944 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.944 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=1169500 00:08:55.944 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:55.944 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 1169500 00:08:55.944 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1169500 ']' 00:08:55.944 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.944 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:55.944 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.944 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:55.944 18:16:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:55.944 [2024-10-08 18:16:14.013053] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:08:55.944 [2024-10-08 18:16:14.013131] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:55.944 [2024-10-08 18:16:14.076234] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.944 [2024-10-08 18:16:14.180138] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:55.944 [2024-10-08 18:16:14.180200] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:55.944 [2024-10-08 18:16:14.180228] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:55.944 [2024-10-08 18:16:14.180239] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:55.944 [2024-10-08 18:16:14.180248] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:55.944 [2024-10-08 18:16:14.180833] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.202 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:56.202 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:08:56.202 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:56.202 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:56.202 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:56.202 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:56.202 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:56.202 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:56.202 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.202 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:56.202 [2024-10-08 18:16:14.325714] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:56.202 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.202 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:56.202 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.203 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:56.203 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.203 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:56.203 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.203 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:56.203 [2024-10-08 18:16:14.341971] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:56.203 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.203 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:56.203 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.203 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:56.203 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.203 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:56.203 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.203 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:56.203 malloc0 00:08:56.203 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.203 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:56.203 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.203 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:56.203 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.203 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:56.203 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:56.203 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:08:56.203 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:08:56.203 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:56.203 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:56.203 { 00:08:56.203 "params": { 00:08:56.203 "name": "Nvme$subsystem", 00:08:56.203 "trtype": "$TEST_TRANSPORT", 00:08:56.203 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:56.203 "adrfam": "ipv4", 00:08:56.203 "trsvcid": "$NVMF_PORT", 00:08:56.203 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:56.203 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:56.203 "hdgst": ${hdgst:-false}, 00:08:56.203 "ddgst": ${ddgst:-false} 00:08:56.203 }, 00:08:56.203 "method": "bdev_nvme_attach_controller" 00:08:56.203 } 00:08:56.203 EOF 00:08:56.203 )") 00:08:56.203 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:08:56.203 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
00:08:56.203 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:08:56.203 18:16:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:56.203 "params": { 00:08:56.203 "name": "Nvme1", 00:08:56.203 "trtype": "tcp", 00:08:56.203 "traddr": "10.0.0.2", 00:08:56.203 "adrfam": "ipv4", 00:08:56.203 "trsvcid": "4420", 00:08:56.203 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:56.203 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:56.203 "hdgst": false, 00:08:56.203 "ddgst": false 00:08:56.203 }, 00:08:56.203 "method": "bdev_nvme_attach_controller" 00:08:56.203 }' 00:08:56.203 [2024-10-08 18:16:14.440108] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:08:56.203 [2024-10-08 18:16:14.440185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1169637 ] 00:08:56.461 [2024-10-08 18:16:14.499410] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.461 [2024-10-08 18:16:14.613394] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.719 Running I/O for 10 seconds... 00:08:58.641 5694.00 IOPS, 44.48 MiB/s [2024-10-08T16:16:17.886Z] 5741.50 IOPS, 44.86 MiB/s [2024-10-08T16:16:19.258Z] 5760.33 IOPS, 45.00 MiB/s [2024-10-08T16:16:19.823Z] 5764.25 IOPS, 45.03 MiB/s [2024-10-08T16:16:21.196Z] 5777.00 IOPS, 45.13 MiB/s [2024-10-08T16:16:22.130Z] 5780.50 IOPS, 45.16 MiB/s [2024-10-08T16:16:23.063Z] 5792.29 IOPS, 45.25 MiB/s [2024-10-08T16:16:23.997Z] 5793.88 IOPS, 45.26 MiB/s [2024-10-08T16:16:24.930Z] 5798.33 IOPS, 45.30 MiB/s [2024-10-08T16:16:24.930Z] 5801.20 IOPS, 45.32 MiB/s 00:09:06.643 Latency(us) 00:09:06.643 [2024-10-08T16:16:24.930Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.643 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:06.643 Verification LBA range: start 0x0 length 0x1000 00:09:06.643 Nvme1n1 : 10.01 5801.08 45.32 0.00 0.00 22004.79 512.76 31263.10 00:09:06.643 [2024-10-08T16:16:24.930Z] =================================================================================================================== 00:09:06.643 [2024-10-08T16:16:24.930Z] Total : 5801.08 45.32 0.00 0.00 22004.79 512.76 31263.10 00:09:06.901 18:16:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1170848 00:09:06.901 18:16:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:06.901 18:16:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:06.901 18:16:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:06.902 18:16:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:06.902 18:16:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:09:06.902 18:16:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:09:06.902 18:16:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:06.902 18:16:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:06.902 { 00:09:06.902 "params": { 00:09:06.902 "name": 
"Nvme$subsystem", 00:09:06.902 "trtype": "$TEST_TRANSPORT", 00:09:06.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:06.902 "adrfam": "ipv4", 00:09:06.902 "trsvcid": "$NVMF_PORT", 00:09:06.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:06.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:06.902 "hdgst": ${hdgst:-false}, 00:09:06.902 "ddgst": ${ddgst:-false} 00:09:06.902 }, 00:09:06.902 "method": "bdev_nvme_attach_controller" 00:09:06.902 } 00:09:06.902 EOF 00:09:06.902 )") 00:09:06.902 18:16:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:09:06.902 [2024-10-08 18:16:25.122860] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.902 [2024-10-08 18:16:25.122920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.902 18:16:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:09:06.902 18:16:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:09:06.902 18:16:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:06.902 "params": { 00:09:06.902 "name": "Nvme1", 00:09:06.902 "trtype": "tcp", 00:09:06.902 "traddr": "10.0.0.2", 00:09:06.902 "adrfam": "ipv4", 00:09:06.902 "trsvcid": "4420", 00:09:06.902 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:06.902 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:06.902 "hdgst": false, 00:09:06.902 "ddgst": false 00:09:06.902 }, 00:09:06.902 "method": "bdev_nvme_attach_controller" 00:09:06.902 }' 00:09:06.902 [2024-10-08 18:16:25.130814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.902 [2024-10-08 18:16:25.130837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.902 [2024-10-08 18:16:25.138814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.902 [2024-10-08 18:16:25.138833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.902 [2024-10-08 18:16:25.146844] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.902 [2024-10-08 18:16:25.146865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.902 [2024-10-08 18:16:25.154862] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.902 [2024-10-08 18:16:25.154883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.902 [2024-10-08 18:16:25.162897] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.902 [2024-10-08 18:16:25.162918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.902 [2024-10-08 18:16:25.164134] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:09:06.902 [2024-10-08 18:16:25.164208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1170848 ] 00:09:06.902 [2024-10-08 18:16:25.170902] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.902 [2024-10-08 18:16:25.170921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.902 [2024-10-08 18:16:25.178925] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.902 [2024-10-08 18:16:25.178944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:06.902 [2024-10-08 18:16:25.186955] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:06.902 [2024-10-08 18:16:25.186977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.161 [2024-10-08 18:16:25.194981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.161 [2024-10-08 18:16:25.195001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.161 [2024-10-08 18:16:25.203004] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.161 [2024-10-08 18:16:25.203024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.161 [2024-10-08 18:16:25.211026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.161 [2024-10-08 18:16:25.211045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.161 [2024-10-08 18:16:25.219049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.161 [2024-10-08 18:16:25.219068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.161 [2024-10-08 18:16:25.227072] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.161 [2024-10-08 18:16:25.227105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.161 [2024-10-08 18:16:25.227586] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.161 [2024-10-08 18:16:25.235119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.161 [2024-10-08 18:16:25.235149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.161 [2024-10-08 18:16:25.243152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.161 [2024-10-08 18:16:25.243185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.161 [2024-10-08 18:16:25.251137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.161 [2024-10-08 18:16:25.251157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.161 [2024-10-08 18:16:25.259157] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.161 [2024-10-08 18:16:25.259177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.161 [2024-10-08 18:16:25.267178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.161 [2024-10-08 18:16:25.267197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:09:07.161 [2024-10-08 18:16:25.275199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.161 [2024-10-08 18:16:25.275218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.161 [2024-10-08 18:16:25.283222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.161 [2024-10-08 18:16:25.283241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.161 [2024-10-08 18:16:25.291279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.161 [2024-10-08 18:16:25.291316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.161 [2024-10-08 18:16:25.299317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.161 [2024-10-08 18:16:25.299366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.161 [2024-10-08 18:16:25.307309] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.161 [2024-10-08 18:16:25.307330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.161 [2024-10-08 18:16:25.315329] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.161 [2024-10-08 18:16:25.315369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.161 [2024-10-08 18:16:25.323357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.161 [2024-10-08 18:16:25.323377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.161 [2024-10-08 18:16:25.331365] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.161 [2024-10-08 18:16:25.331385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.161 [2024-10-08 18:16:25.339388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.161 [2024-10-08 18:16:25.339408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.161 [2024-10-08 18:16:25.343022] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.161 [2024-10-08 18:16:25.347411] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.161 [2024-10-08 18:16:25.347431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.161 [2024-10-08 18:16:25.355436] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.161 [2024-10-08 18:16:25.355456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.161 [2024-10-08 18:16:25.363484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.161 [2024-10-08 18:16:25.363516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.161 [2024-10-08 18:16:25.371509] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.161 [2024-10-08 18:16:25.371554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.161 [2024-10-08 18:16:25.379531] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.161 [2024-10-08 18:16:25.379596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.161 [2024-10-08 
18:16:25.387585] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.161 [2024-10-08 18:16:25.387619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.161 [2024-10-08 18:16:25.395583] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.161 [2024-10-08 18:16:25.395645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.161 [2024-10-08 18:16:25.403623] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.161 [2024-10-08 18:16:25.403673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.161 [2024-10-08 18:16:25.411606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.161 [2024-10-08 18:16:25.411627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.161 [2024-10-08 18:16:25.419672] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.161 [2024-10-08 18:16:25.419706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.161 [2024-10-08 18:16:25.427696] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.161 [2024-10-08 18:16:25.427728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.161 [2024-10-08 18:16:25.435703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.161 [2024-10-08 18:16:25.435734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.161 [2024-10-08 18:16:25.443689] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.161 [2024-10-08 18:16:25.443720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.420 [2024-10-08 18:16:25.451736] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.420 [2024-10-08 18:16:25.451760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.420 [2024-10-08 18:16:25.459733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.420 [2024-10-08 18:16:25.459759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.420 [2024-10-08 18:16:25.467757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.420 [2024-10-08 18:16:25.467780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.420 [2024-10-08 18:16:25.475776] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.420 [2024-10-08 18:16:25.475798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.420 [2024-10-08 18:16:25.483797] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.420 [2024-10-08 18:16:25.483831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.420 [2024-10-08 18:16:25.491842] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.420 [2024-10-08 18:16:25.491864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.420 [2024-10-08 18:16:25.499839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.420 [2024-10-08 18:16:25.499861] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.420 [2024-10-08 18:16:25.507858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.420 [2024-10-08 18:16:25.507878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.420 [2024-10-08 18:16:25.515880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.420 [2024-10-08 18:16:25.515900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.420 [2024-10-08 18:16:25.523903] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.420 [2024-10-08 18:16:25.523923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.420 [2024-10-08 18:16:25.531925] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.420 [2024-10-08 18:16:25.531944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.420 [2024-10-08 18:16:25.539951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.420 [2024-10-08 18:16:25.539974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.420 [2024-10-08 18:16:25.547969] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.420 [2024-10-08 18:16:25.547988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.420 [2024-10-08 18:16:25.555990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.420 [2024-10-08 18:16:25.556010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.420 [2024-10-08 18:16:25.564013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.420 [2024-10-08 18:16:25.564032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.420 [2024-10-08 18:16:25.572034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.420 [2024-10-08 18:16:25.572054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.421 [2024-10-08 18:16:25.580059] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.421 [2024-10-08 18:16:25.580079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.421 [2024-10-08 18:16:25.588082] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.421 [2024-10-08 18:16:25.588103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.421 [2024-10-08 18:16:25.596113] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.421 [2024-10-08 18:16:25.596133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.421 [2024-10-08 18:16:25.604127] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.421 [2024-10-08 18:16:25.604146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.421 [2024-10-08 18:16:25.612150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.421 [2024-10-08 18:16:25.612175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.421 [2024-10-08 18:16:25.620172] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.421 [2024-10-08 18:16:25.620192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.421 [2024-10-08 18:16:25.628195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.421 [2024-10-08 18:16:25.628216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.421 [2024-10-08 18:16:25.636214] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.421 [2024-10-08 18:16:25.636233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.421 [2024-10-08 18:16:25.644245] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.421 [2024-10-08 18:16:25.644283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.421 Running I/O for 5 seconds... 00:09:07.421 [2024-10-08 18:16:25.652265] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.421 [2024-10-08 18:16:25.652310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.421 [2024-10-08 18:16:25.667756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.421 [2024-10-08 18:16:25.667785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.421 [2024-10-08 18:16:25.679431] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.421 [2024-10-08 18:16:25.679462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.421 [2024-10-08 18:16:25.691550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.421 [2024-10-08 18:16:25.691598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.421 [2024-10-08 18:16:25.703706] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.421 [2024-10-08 18:16:25.703739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.679 [2024-10-08 18:16:25.715708] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.679 [2024-10-08 18:16:25.715737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.679 [2024-10-08 18:16:25.727887] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.679 [2024-10-08 18:16:25.727916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.679 [2024-10-08 18:16:25.738981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.679 [2024-10-08 18:16:25.739009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.679 [2024-10-08 18:16:25.750785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.679 [2024-10-08 18:16:25.750814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.679 [2024-10-08 18:16:25.762492] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.679 [2024-10-08 18:16:25.762521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.680 [2024-10-08 18:16:25.774205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.680 
[2024-10-08 18:16:25.774233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.680 [2024-10-08 18:16:25.786053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.680 [2024-10-08 18:16:25.786081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.680 [2024-10-08 18:16:25.797832] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.680 [2024-10-08 18:16:25.797860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.680 [2024-10-08 18:16:25.809519] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.680 [2024-10-08 18:16:25.809548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.680 [2024-10-08 18:16:25.821094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.680 [2024-10-08 18:16:25.821130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.680 [2024-10-08 18:16:25.832578] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.680 [2024-10-08 18:16:25.832606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.680 [2024-10-08 18:16:25.843919] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.680 [2024-10-08 18:16:25.843947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.680 [2024-10-08 18:16:25.855092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.680 [2024-10-08 18:16:25.855120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.680 [2024-10-08 18:16:25.867090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.680 [2024-10-08 18:16:25.867118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.680 [2024-10-08 18:16:25.878760] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.680 [2024-10-08 18:16:25.878788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.680 [2024-10-08 18:16:25.890922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.680 [2024-10-08 18:16:25.890949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.680 [2024-10-08 18:16:25.902900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.680 [2024-10-08 18:16:25.902927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.680 [2024-10-08 18:16:25.915097] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.680 [2024-10-08 18:16:25.915126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.680 [2024-10-08 18:16:25.926233] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.680 [2024-10-08 18:16:25.926262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.680 [2024-10-08 18:16:25.937561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.680 [2024-10-08 18:16:25.937605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.680 [2024-10-08 18:16:25.949387] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.680 [2024-10-08 18:16:25.949416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.680 [2024-10-08 18:16:25.960749] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.680 [2024-10-08 18:16:25.960777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.938 [2024-10-08 18:16:25.974390] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.938 [2024-10-08 18:16:25.974419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.938 [2024-10-08 18:16:25.985026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.938 [2024-10-08 18:16:25.985055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.938 [2024-10-08 18:16:25.996528] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.938 [2024-10-08 18:16:25.996557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.938 [2024-10-08 18:16:26.007874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.938 [2024-10-08 18:16:26.007902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.938 [2024-10-08 18:16:26.019720] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.938 [2024-10-08 18:16:26.019748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.938 [2024-10-08 18:16:26.031860] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.938 [2024-10-08 18:16:26.031889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.938 [2024-10-08 18:16:26.043168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.938 [2024-10-08 18:16:26.043208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.938 [2024-10-08 18:16:26.055028] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.938 [2024-10-08 18:16:26.055056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.938 [2024-10-08 18:16:26.066995] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.939 [2024-10-08 18:16:26.067024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.939 [2024-10-08 18:16:26.077936] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.939 [2024-10-08 18:16:26.077964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.939 [2024-10-08 18:16:26.089958] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.939 [2024-10-08 18:16:26.089986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.939 [2024-10-08 18:16:26.101646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.939 [2024-10-08 18:16:26.101674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.939 [2024-10-08 18:16:26.112817] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.939 [2024-10-08 18:16:26.112845] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.939 [2024-10-08 18:16:26.124854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.939 [2024-10-08 18:16:26.124886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.939 [2024-10-08 18:16:26.136914] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.939 [2024-10-08 18:16:26.136942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.939 [2024-10-08 18:16:26.148133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.939 [2024-10-08 18:16:26.148160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.939 [2024-10-08 18:16:26.160085] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.939 [2024-10-08 18:16:26.160113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.939 [2024-10-08 18:16:26.171680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.939 [2024-10-08 18:16:26.171708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.939 [2024-10-08 18:16:26.183748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.939 [2024-10-08 18:16:26.183776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.939 [2024-10-08 18:16:26.195499] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.939 [2024-10-08 18:16:26.195528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.939 [2024-10-08 18:16:26.207201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.939 [2024-10-08 18:16:26.207228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:07.939 [2024-10-08 18:16:26.218871] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:07.939 [2024-10-08 18:16:26.218899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.197 [2024-10-08 18:16:26.230057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.197 [2024-10-08 18:16:26.230085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.198 [2024-10-08 18:16:26.241602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.198 [2024-10-08 18:16:26.241630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.198 [2024-10-08 18:16:26.253526] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.198 [2024-10-08 18:16:26.253555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.198 [2024-10-08 18:16:26.267037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.198 [2024-10-08 18:16:26.267064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.198 [2024-10-08 18:16:26.278222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.198 [2024-10-08 18:16:26.278250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.198 [2024-10-08 18:16:26.289678] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.198 [2024-10-08 18:16:26.289706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.198 [2024-10-08 18:16:26.301192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.198 [2024-10-08 18:16:26.301220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.198 [2024-10-08 18:16:26.312808] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.198 [2024-10-08 18:16:26.312836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.198 [2024-10-08 18:16:26.324070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.198 [2024-10-08 18:16:26.324097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.198 [2024-10-08 18:16:26.335452] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.198 [2024-10-08 18:16:26.335480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.198 [2024-10-08 18:16:26.346952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.198 [2024-10-08 18:16:26.346980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.198 [2024-10-08 18:16:26.359088] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.198 [2024-10-08 18:16:26.359116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.198 [2024-10-08 18:16:26.370734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.198 [2024-10-08 18:16:26.370762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.198 [2024-10-08 18:16:26.382620] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.198 [2024-10-08 18:16:26.382648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.198 [2024-10-08 18:16:26.394566] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.198 [2024-10-08 18:16:26.394610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.198 [2024-10-08 18:16:26.405250] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.198 [2024-10-08 18:16:26.405301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.198 [2024-10-08 18:16:26.416372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.198 [2024-10-08 18:16:26.416401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.198 [2024-10-08 18:16:26.427914] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.198 [2024-10-08 18:16:26.427942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.198 [2024-10-08 18:16:26.439371] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.198 [2024-10-08 18:16:26.439399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.198 [2024-10-08 18:16:26.453251] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.198 [2024-10-08 18:16:26.453303] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.198 [2024-10-08 18:16:26.463779] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.198 [2024-10-08 18:16:26.463807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.198 [2024-10-08 18:16:26.475243] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.198 [2024-10-08 18:16:26.475285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.457 [2024-10-08 18:16:26.487137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.457 [2024-10-08 18:16:26.487165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.457 [2024-10-08 18:16:26.500168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.457 [2024-10-08 18:16:26.500197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.457 [2024-10-08 18:16:26.509987] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.457 [2024-10-08 18:16:26.510015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.457 [2024-10-08 18:16:26.522299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.457 [2024-10-08 18:16:26.522328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.457 [2024-10-08 18:16:26.535720] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.457 [2024-10-08 18:16:26.535750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.457 [2024-10-08 18:16:26.546989] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.457 [2024-10-08 18:16:26.547018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.457 [2024-10-08 18:16:26.557866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.457 [2024-10-08 18:16:26.557895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.457 [2024-10-08 18:16:26.569907] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.457 [2024-10-08 18:16:26.569936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.457 [2024-10-08 18:16:26.581464] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.457 [2024-10-08 18:16:26.581493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.457 [2024-10-08 18:16:26.592491] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.457 [2024-10-08 18:16:26.592520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.457 [2024-10-08 18:16:26.604200] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.457 [2024-10-08 18:16:26.604228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.457 [2024-10-08 18:16:26.615702] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.457 [2024-10-08 18:16:26.615730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.457 [2024-10-08 18:16:26.629253] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.457 [2024-10-08 18:16:26.629307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.457 [2024-10-08 18:16:26.640026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.457 [2024-10-08 18:16:26.640054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.457 [2024-10-08 18:16:26.651380] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.457 [2024-10-08 18:16:26.651408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.457 10834.00 IOPS, 84.64 MiB/s [2024-10-08T16:16:26.744Z] [2024-10-08 18:16:26.663144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.457 [2024-10-08 18:16:26.663172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.457 [2024-10-08 18:16:26.676453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.457 [2024-10-08 18:16:26.676482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.457 [2024-10-08 18:16:26.687181] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.457 [2024-10-08 18:16:26.687209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.457 [2024-10-08 18:16:26.698809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.458 [2024-10-08 18:16:26.698870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.458 [2024-10-08 18:16:26.710772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.458 [2024-10-08 18:16:26.710800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.458 [2024-10-08 18:16:26.722853] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.458 [2024-10-08 18:16:26.722880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.458 [2024-10-08 18:16:26.734643] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.458 [2024-10-08 18:16:26.734685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.716 [2024-10-08 18:16:26.746058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.716 [2024-10-08 18:16:26.746086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.716 [2024-10-08 18:16:26.757389] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.716 [2024-10-08 18:16:26.757418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.716 [2024-10-08 18:16:26.769222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.716 [2024-10-08 18:16:26.769249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.716 [2024-10-08 18:16:26.780788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.716 [2024-10-08 18:16:26.780815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.716 [2024-10-08 18:16:26.792613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
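Editor's note: the repeated message pair above is target-side output: subsystem.c:2128 (spdk_nvmf_subsystem_add_ns_ext) rejects the request because NSID 1 is already attached to the subsystem, and nvmf_rpc.c:1517 then fails the corresponding namespace-add RPC. The log does not show the command that drives these requests, so the following is only an illustrative sketch of how the same error can be provoked with the SPDK rpc.py script; the subsystem NQN and bdev names are assumptions, not values taken from this run.

    # hypothetical reproduction, not part of this job's test scripts
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1   # rejected: "Requested NSID 1 already in use"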
00:09:08.716 [2024-10-08 18:16:26.792641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.716 [2024-10-08 18:16:26.804865] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.716 [2024-10-08 18:16:26.804893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.716 [2024-10-08 18:16:26.816663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.716 [2024-10-08 18:16:26.816690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.716 [2024-10-08 18:16:26.828364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.716 [2024-10-08 18:16:26.828392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.716 [2024-10-08 18:16:26.840258] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.716 [2024-10-08 18:16:26.840309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.716 [2024-10-08 18:16:26.851718] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.716 [2024-10-08 18:16:26.851760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.716 [2024-10-08 18:16:26.863387] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.716 [2024-10-08 18:16:26.863415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.716 [2024-10-08 18:16:26.874867] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.716 [2024-10-08 18:16:26.874895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.716 [2024-10-08 18:16:26.886038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.716 [2024-10-08 18:16:26.886066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.716 [2024-10-08 18:16:26.897244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.716 [2024-10-08 18:16:26.897297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.716 [2024-10-08 18:16:26.908770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.716 [2024-10-08 18:16:26.908798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.716 [2024-10-08 18:16:26.919647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.716 [2024-10-08 18:16:26.919699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.716 [2024-10-08 18:16:26.930671] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.716 [2024-10-08 18:16:26.930698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.716 [2024-10-08 18:16:26.943512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.716 [2024-10-08 18:16:26.943541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.716 [2024-10-08 18:16:26.954360] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.716 [2024-10-08 18:16:26.954388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.716 [2024-10-08 18:16:26.965345] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.716 [2024-10-08 18:16:26.965373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.716 [2024-10-08 18:16:26.976909] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.716 [2024-10-08 18:16:26.976937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.716 [2024-10-08 18:16:26.988300] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.716 [2024-10-08 18:16:26.988329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.716 [2024-10-08 18:16:26.999595] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.716 [2024-10-08 18:16:26.999625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.975 [2024-10-08 18:16:27.010718] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.975 [2024-10-08 18:16:27.010747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.975 [2024-10-08 18:16:27.023817] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.975 [2024-10-08 18:16:27.023846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.975 [2024-10-08 18:16:27.035076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.975 [2024-10-08 18:16:27.035105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.975 [2024-10-08 18:16:27.046888] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.975 [2024-10-08 18:16:27.046916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.975 [2024-10-08 18:16:27.058978] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.975 [2024-10-08 18:16:27.059008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.975 [2024-10-08 18:16:27.072488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.975 [2024-10-08 18:16:27.072516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.975 [2024-10-08 18:16:27.083451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.975 [2024-10-08 18:16:27.083489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.975 [2024-10-08 18:16:27.095148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.975 [2024-10-08 18:16:27.095192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.975 [2024-10-08 18:16:27.106748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.975 [2024-10-08 18:16:27.106776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.975 [2024-10-08 18:16:27.119304] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.975 [2024-10-08 18:16:27.119332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.975 [2024-10-08 18:16:27.130546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.975 [2024-10-08 18:16:27.130575] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.975 [2024-10-08 18:16:27.141691] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.975 [2024-10-08 18:16:27.141730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.975 [2024-10-08 18:16:27.154654] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.975 [2024-10-08 18:16:27.154682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.975 [2024-10-08 18:16:27.165365] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.976 [2024-10-08 18:16:27.165394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.976 [2024-10-08 18:16:27.177062] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.976 [2024-10-08 18:16:27.177091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.976 [2024-10-08 18:16:27.188747] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.976 [2024-10-08 18:16:27.188776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.976 [2024-10-08 18:16:27.200005] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.976 [2024-10-08 18:16:27.200033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.976 [2024-10-08 18:16:27.211446] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.976 [2024-10-08 18:16:27.211475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.976 [2024-10-08 18:16:27.223106] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.976 [2024-10-08 18:16:27.223135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.976 [2024-10-08 18:16:27.234357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.976 [2024-10-08 18:16:27.234387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.976 [2024-10-08 18:16:27.245598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.976 [2024-10-08 18:16:27.245627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:08.976 [2024-10-08 18:16:27.256923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:08.976 [2024-10-08 18:16:27.256950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.234 [2024-10-08 18:16:27.268437] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.234 [2024-10-08 18:16:27.268467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.234 [2024-10-08 18:16:27.279752] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.234 [2024-10-08 18:16:27.279796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.234 [2024-10-08 18:16:27.291349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.234 [2024-10-08 18:16:27.291378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.234 [2024-10-08 18:16:27.302506] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.234 [2024-10-08 18:16:27.302535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.234 [2024-10-08 18:16:27.316123] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.234 [2024-10-08 18:16:27.316150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.234 [2024-10-08 18:16:27.326926] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.234 [2024-10-08 18:16:27.326954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.234 [2024-10-08 18:16:27.338338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.234 [2024-10-08 18:16:27.338366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.234 [2024-10-08 18:16:27.349557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.234 [2024-10-08 18:16:27.349585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.234 [2024-10-08 18:16:27.360756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.234 [2024-10-08 18:16:27.360810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.234 [2024-10-08 18:16:27.374299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.234 [2024-10-08 18:16:27.374328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.234 [2024-10-08 18:16:27.385220] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.234 [2024-10-08 18:16:27.385248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.234 [2024-10-08 18:16:27.396851] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.234 [2024-10-08 18:16:27.396893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.234 [2024-10-08 18:16:27.408365] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.234 [2024-10-08 18:16:27.408393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.234 [2024-10-08 18:16:27.420051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.234 [2024-10-08 18:16:27.420093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.234 [2024-10-08 18:16:27.431538] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.234 [2024-10-08 18:16:27.431567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.234 [2024-10-08 18:16:27.442800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.234 [2024-10-08 18:16:27.442827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.234 [2024-10-08 18:16:27.453902] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.234 [2024-10-08 18:16:27.453930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.234 [2024-10-08 18:16:27.465145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.234 [2024-10-08 18:16:27.465173] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.234 [2024-10-08 18:16:27.478819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.234 [2024-10-08 18:16:27.478847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.234 [2024-10-08 18:16:27.490017] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.234 [2024-10-08 18:16:27.490045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.234 [2024-10-08 18:16:27.501035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.234 [2024-10-08 18:16:27.501076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.234 [2024-10-08 18:16:27.512641] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.234 [2024-10-08 18:16:27.512670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.492 [2024-10-08 18:16:27.523858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.492 [2024-10-08 18:16:27.523887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.492 [2024-10-08 18:16:27.535113] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.492 [2024-10-08 18:16:27.535141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.492 [2024-10-08 18:16:27.546364] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.492 [2024-10-08 18:16:27.546392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.492 [2024-10-08 18:16:27.558189] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.492 [2024-10-08 18:16:27.558216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.492 [2024-10-08 18:16:27.569602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.492 [2024-10-08 18:16:27.569630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.492 [2024-10-08 18:16:27.581449] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.492 [2024-10-08 18:16:27.581486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.492 [2024-10-08 18:16:27.593719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.492 [2024-10-08 18:16:27.593747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.492 [2024-10-08 18:16:27.605361] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.492 [2024-10-08 18:16:27.605390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.492 [2024-10-08 18:16:27.617039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.492 [2024-10-08 18:16:27.617067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.492 [2024-10-08 18:16:27.628674] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.492 [2024-10-08 18:16:27.628703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.492 [2024-10-08 18:16:27.640122] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.492 [2024-10-08 18:16:27.640150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.492 [2024-10-08 18:16:27.651954] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.492 [2024-10-08 18:16:27.651983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.492 10945.50 IOPS, 85.51 MiB/s [2024-10-08T16:16:27.779Z] [2024-10-08 18:16:27.663519] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.492 [2024-10-08 18:16:27.663548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.492 [2024-10-08 18:16:27.675283] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.492 [2024-10-08 18:16:27.675321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.492 [2024-10-08 18:16:27.688696] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.492 [2024-10-08 18:16:27.688725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.492 [2024-10-08 18:16:27.699207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.492 [2024-10-08 18:16:27.699235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.492 [2024-10-08 18:16:27.710393] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.492 [2024-10-08 18:16:27.710423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.492 [2024-10-08 18:16:27.722054] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.492 [2024-10-08 18:16:27.722083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.492 [2024-10-08 18:16:27.733868] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.493 [2024-10-08 18:16:27.733897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.493 [2024-10-08 18:16:27.745376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.493 [2024-10-08 18:16:27.745405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.493 [2024-10-08 18:16:27.757033] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.493 [2024-10-08 18:16:27.757062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.493 [2024-10-08 18:16:27.768172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.493 [2024-10-08 18:16:27.768202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.493 [2024-10-08 18:16:27.779734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.493 [2024-10-08 18:16:27.779762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.751 [2024-10-08 18:16:27.791517] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.751 [2024-10-08 18:16:27.791546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.751 [2024-10-08 18:16:27.803066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
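Editor's note: the bare "10945.50 IOPS, 85.51 MiB/s [...]" entries interleaved with the error messages appear to be periodic throughput samples from the I/O workload running against the target while the namespace-add loop executes; they are unrelated to the RPC failures. The two figures in each sample are consistent with an 8 KiB I/O size, which is an inference from the numbers rather than something stated in the log:

    # sanity check of the reported sample, assuming 8 KiB per I/O
    echo "10945.50 * 8192 / 1048576" | bc -l   # ~= 85.51 MiB/s, matching the logged throughput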
00:09:09.751 [2024-10-08 18:16:27.803093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.751 [2024-10-08 18:16:27.814881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.751 [2024-10-08 18:16:27.814909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.751 [2024-10-08 18:16:27.828013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.751 [2024-10-08 18:16:27.828041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.751 [2024-10-08 18:16:27.838481] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.751 [2024-10-08 18:16:27.838510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.751 [2024-10-08 18:16:27.850366] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.751 [2024-10-08 18:16:27.850394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.751 [2024-10-08 18:16:27.862071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.751 [2024-10-08 18:16:27.862099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.751 [2024-10-08 18:16:27.873680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.751 [2024-10-08 18:16:27.873707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.751 [2024-10-08 18:16:27.887504] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.751 [2024-10-08 18:16:27.887532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.751 [2024-10-08 18:16:27.898951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.751 [2024-10-08 18:16:27.898978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.751 [2024-10-08 18:16:27.911575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.751 [2024-10-08 18:16:27.911618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.751 [2024-10-08 18:16:27.921705] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.751 [2024-10-08 18:16:27.921733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.751 [2024-10-08 18:16:27.933002] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.751 [2024-10-08 18:16:27.933029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.751 [2024-10-08 18:16:27.946543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.751 [2024-10-08 18:16:27.946572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.751 [2024-10-08 18:16:27.957037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.751 [2024-10-08 18:16:27.957065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.751 [2024-10-08 18:16:27.968746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.751 [2024-10-08 18:16:27.968773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.751 [2024-10-08 18:16:27.979996] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.751 [2024-10-08 18:16:27.980024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.751 [2024-10-08 18:16:27.990946] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.751 [2024-10-08 18:16:27.990973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.751 [2024-10-08 18:16:28.002639] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.751 [2024-10-08 18:16:28.002667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.751 [2024-10-08 18:16:28.013506] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.751 [2024-10-08 18:16:28.013534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.751 [2024-10-08 18:16:28.025029] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.751 [2024-10-08 18:16:28.025057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:09.751 [2024-10-08 18:16:28.036270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:09.751 [2024-10-08 18:16:28.036308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.009 [2024-10-08 18:16:28.047503] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.009 [2024-10-08 18:16:28.047532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.009 [2024-10-08 18:16:28.058682] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.009 [2024-10-08 18:16:28.058710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.009 [2024-10-08 18:16:28.069991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.009 [2024-10-08 18:16:28.070020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.009 [2024-10-08 18:16:28.081801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.009 [2024-10-08 18:16:28.081828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.009 [2024-10-08 18:16:28.093296] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.009 [2024-10-08 18:16:28.093324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.009 [2024-10-08 18:16:28.106718] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.009 [2024-10-08 18:16:28.106745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.009 [2024-10-08 18:16:28.117589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.009 [2024-10-08 18:16:28.117632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.009 [2024-10-08 18:16:28.128822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.009 [2024-10-08 18:16:28.128864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.009 [2024-10-08 18:16:28.140167] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.009 [2024-10-08 18:16:28.140195] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.009 [2024-10-08 18:16:28.151541] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.009 [2024-10-08 18:16:28.151569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.009 [2024-10-08 18:16:28.165071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.009 [2024-10-08 18:16:28.165112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.009 [2024-10-08 18:16:28.176114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.009 [2024-10-08 18:16:28.176141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.009 [2024-10-08 18:16:28.188138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.009 [2024-10-08 18:16:28.188165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.009 [2024-10-08 18:16:28.199735] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.009 [2024-10-08 18:16:28.199762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.009 [2024-10-08 18:16:28.211341] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.009 [2024-10-08 18:16:28.211369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.009 [2024-10-08 18:16:28.222960] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.009 [2024-10-08 18:16:28.222988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.009 [2024-10-08 18:16:28.234507] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.009 [2024-10-08 18:16:28.234543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.009 [2024-10-08 18:16:28.246373] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.009 [2024-10-08 18:16:28.246401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.009 [2024-10-08 18:16:28.258191] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.009 [2024-10-08 18:16:28.258218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.009 [2024-10-08 18:16:28.269662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.009 [2024-10-08 18:16:28.269690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.009 [2024-10-08 18:16:28.282762] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.009 [2024-10-08 18:16:28.282790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.009 [2024-10-08 18:16:28.293412] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.009 [2024-10-08 18:16:28.293441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.266 [2024-10-08 18:16:28.305057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.266 [2024-10-08 18:16:28.305085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.266 [2024-10-08 18:16:28.316644] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.266 [2024-10-08 18:16:28.316673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.266 [2024-10-08 18:16:28.330652] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.266 [2024-10-08 18:16:28.330680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.266 [2024-10-08 18:16:28.341899] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.266 [2024-10-08 18:16:28.341926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.266 [2024-10-08 18:16:28.353339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.266 [2024-10-08 18:16:28.353368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.266 [2024-10-08 18:16:28.364778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.266 [2024-10-08 18:16:28.364806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.266 [2024-10-08 18:16:28.376118] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.266 [2024-10-08 18:16:28.376146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.266 [2024-10-08 18:16:28.387475] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.266 [2024-10-08 18:16:28.387503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.266 [2024-10-08 18:16:28.398944] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.266 [2024-10-08 18:16:28.398971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.266 [2024-10-08 18:16:28.410853] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.266 [2024-10-08 18:16:28.410881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.266 [2024-10-08 18:16:28.422119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.266 [2024-10-08 18:16:28.422147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.266 [2024-10-08 18:16:28.433938] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.266 [2024-10-08 18:16:28.433966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.266 [2024-10-08 18:16:28.445474] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.266 [2024-10-08 18:16:28.445503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.266 [2024-10-08 18:16:28.456871] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.266 [2024-10-08 18:16:28.456906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.266 [2024-10-08 18:16:28.468665] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.266 [2024-10-08 18:16:28.468693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.266 [2024-10-08 18:16:28.479871] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.266 [2024-10-08 18:16:28.479898] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.266 [2024-10-08 18:16:28.491363] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.266 [2024-10-08 18:16:28.491392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.266 [2024-10-08 18:16:28.502942] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.266 [2024-10-08 18:16:28.502970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.266 [2024-10-08 18:16:28.516257] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.266 [2024-10-08 18:16:28.516309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.266 [2024-10-08 18:16:28.527184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.266 [2024-10-08 18:16:28.527211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.266 [2024-10-08 18:16:28.538638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.266 [2024-10-08 18:16:28.538681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.266 [2024-10-08 18:16:28.550185] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.266 [2024-10-08 18:16:28.550213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.523 [2024-10-08 18:16:28.561976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.523 [2024-10-08 18:16:28.562004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.523 [2024-10-08 18:16:28.573067] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.523 [2024-10-08 18:16:28.573094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.523 [2024-10-08 18:16:28.585201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.523 [2024-10-08 18:16:28.585228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.523 [2024-10-08 18:16:28.596793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.523 [2024-10-08 18:16:28.596821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.524 [2024-10-08 18:16:28.608205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.524 [2024-10-08 18:16:28.608248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.524 [2024-10-08 18:16:28.619632] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.524 [2024-10-08 18:16:28.619660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.524 [2024-10-08 18:16:28.633044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.524 [2024-10-08 18:16:28.633072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.524 [2024-10-08 18:16:28.643398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.524 [2024-10-08 18:16:28.643426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.524 [2024-10-08 18:16:28.655645] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.524 [2024-10-08 18:16:28.655674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.524 10993.00 IOPS, 85.88 MiB/s [2024-10-08T16:16:28.811Z] [2024-10-08 18:16:28.666633] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.524 [2024-10-08 18:16:28.666661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.524 [2024-10-08 18:16:28.678565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.524 [2024-10-08 18:16:28.678617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.524 [2024-10-08 18:16:28.690677] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.524 [2024-10-08 18:16:28.690705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.524 [2024-10-08 18:16:28.702298] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.524 [2024-10-08 18:16:28.702326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.524 [2024-10-08 18:16:28.714081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.524 [2024-10-08 18:16:28.714110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.524 [2024-10-08 18:16:28.725819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.524 [2024-10-08 18:16:28.725847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.524 [2024-10-08 18:16:28.737655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.524 [2024-10-08 18:16:28.737683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.524 [2024-10-08 18:16:28.749725] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.524 [2024-10-08 18:16:28.749753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.524 [2024-10-08 18:16:28.761244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.524 [2024-10-08 18:16:28.761295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.524 [2024-10-08 18:16:28.774609] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.524 [2024-10-08 18:16:28.774637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.524 [2024-10-08 18:16:28.784300] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.524 [2024-10-08 18:16:28.784329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.524 [2024-10-08 18:16:28.796464] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.524 [2024-10-08 18:16:28.796494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.524 [2024-10-08 18:16:28.808310] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.524 [2024-10-08 18:16:28.808340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.782 [2024-10-08 18:16:28.819651] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:10.782 [2024-10-08 18:16:28.819679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.782 [2024-10-08 18:16:28.831122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.782 [2024-10-08 18:16:28.831150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.782 [2024-10-08 18:16:28.842488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.782 [2024-10-08 18:16:28.842517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.782 [2024-10-08 18:16:28.854275] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.782 [2024-10-08 18:16:28.854328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.782 [2024-10-08 18:16:28.866211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.782 [2024-10-08 18:16:28.866239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.782 [2024-10-08 18:16:28.880008] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.782 [2024-10-08 18:16:28.880038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.782 [2024-10-08 18:16:28.891561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.782 [2024-10-08 18:16:28.891605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.782 [2024-10-08 18:16:28.905378] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.782 [2024-10-08 18:16:28.905407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.782 [2024-10-08 18:16:28.916976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.782 [2024-10-08 18:16:28.917004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.782 [2024-10-08 18:16:28.930483] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.782 [2024-10-08 18:16:28.930512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.782 [2024-10-08 18:16:28.941411] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.782 [2024-10-08 18:16:28.941439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.782 [2024-10-08 18:16:28.953468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.782 [2024-10-08 18:16:28.953496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.782 [2024-10-08 18:16:28.964673] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.782 [2024-10-08 18:16:28.964701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.782 [2024-10-08 18:16:28.977736] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.782 [2024-10-08 18:16:28.977764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.782 [2024-10-08 18:16:28.988075] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.782 [2024-10-08 18:16:28.988103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.782 [2024-10-08 18:16:29.000155] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.782 [2024-10-08 18:16:29.000182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.782 [2024-10-08 18:16:29.011822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.782 [2024-10-08 18:16:29.011849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.782 [2024-10-08 18:16:29.023082] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.782 [2024-10-08 18:16:29.023111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.782 [2024-10-08 18:16:29.034281] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.782 [2024-10-08 18:16:29.034321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.782 [2024-10-08 18:16:29.045550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.782 [2024-10-08 18:16:29.045579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.782 [2024-10-08 18:16:29.058928] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.782 [2024-10-08 18:16:29.058956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.040 [2024-10-08 18:16:29.069642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.040 [2024-10-08 18:16:29.069671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.040 [2024-10-08 18:16:29.080939] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.040 [2024-10-08 18:16:29.080966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.040 [2024-10-08 18:16:29.092200] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.040 [2024-10-08 18:16:29.092228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.040 [2024-10-08 18:16:29.103688] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.040 [2024-10-08 18:16:29.103716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.041 [2024-10-08 18:16:29.115196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.041 [2024-10-08 18:16:29.115224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.041 [2024-10-08 18:16:29.127138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.041 [2024-10-08 18:16:29.127166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.041 [2024-10-08 18:16:29.138833] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.041 [2024-10-08 18:16:29.138861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.041 [2024-10-08 18:16:29.150523] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.041 [2024-10-08 18:16:29.150552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.041 [2024-10-08 18:16:29.161771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.041 [2024-10-08 18:16:29.161799] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.041 [2024-10-08 18:16:29.175239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.041 [2024-10-08 18:16:29.175282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.041 [2024-10-08 18:16:29.186011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.041 [2024-10-08 18:16:29.186039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.041 [2024-10-08 18:16:29.198064] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.041 [2024-10-08 18:16:29.198092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.041 [2024-10-08 18:16:29.209609] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.041 [2024-10-08 18:16:29.209636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.041 [2024-10-08 18:16:29.220789] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.041 [2024-10-08 18:16:29.220817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.041 [2024-10-08 18:16:29.232190] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.041 [2024-10-08 18:16:29.232218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.041 [2024-10-08 18:16:29.243826] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.041 [2024-10-08 18:16:29.243854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.041 [2024-10-08 18:16:29.255703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.041 [2024-10-08 18:16:29.255731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.041 [2024-10-08 18:16:29.267762] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.041 [2024-10-08 18:16:29.267789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.041 [2024-10-08 18:16:29.281422] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.041 [2024-10-08 18:16:29.281451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.041 [2024-10-08 18:16:29.292225] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.041 [2024-10-08 18:16:29.292253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.041 [2024-10-08 18:16:29.303741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.041 [2024-10-08 18:16:29.303768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.041 [2024-10-08 18:16:29.315095] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.041 [2024-10-08 18:16:29.315123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.041 [2024-10-08 18:16:29.326747] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.041 [2024-10-08 18:16:29.326776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.299 [2024-10-08 18:16:29.339226] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.299 [2024-10-08 18:16:29.339253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.299 [2024-10-08 18:16:29.352740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.299 [2024-10-08 18:16:29.352768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.299 [2024-10-08 18:16:29.363828] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.299 [2024-10-08 18:16:29.363855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.299 [2024-10-08 18:16:29.375653] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.299 [2024-10-08 18:16:29.375681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.299 [2024-10-08 18:16:29.387499] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.299 [2024-10-08 18:16:29.387528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.299 [2024-10-08 18:16:29.401095] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.299 [2024-10-08 18:16:29.401122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.299 [2024-10-08 18:16:29.412357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.299 [2024-10-08 18:16:29.412386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.299 [2024-10-08 18:16:29.423978] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.299 [2024-10-08 18:16:29.424006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.299 [2024-10-08 18:16:29.435154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.299 [2024-10-08 18:16:29.435182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.299 [2024-10-08 18:16:29.447079] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.299 [2024-10-08 18:16:29.447106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.299 [2024-10-08 18:16:29.458956] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.299 [2024-10-08 18:16:29.458983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.299 [2024-10-08 18:16:29.470636] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.299 [2024-10-08 18:16:29.470665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.299 [2024-10-08 18:16:29.483823] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.299 [2024-10-08 18:16:29.483853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.299 [2024-10-08 18:16:29.494748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.299 [2024-10-08 18:16:29.494777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.299 [2024-10-08 18:16:29.506407] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.299 [2024-10-08 18:16:29.506435] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.299 [2024-10-08 18:16:29.519554] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.299 [2024-10-08 18:16:29.519583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.299 [2024-10-08 18:16:29.530568] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.299 [2024-10-08 18:16:29.530597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.299 [2024-10-08 18:16:29.542590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.299 [2024-10-08 18:16:29.542619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.299 [2024-10-08 18:16:29.554063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.299 [2024-10-08 18:16:29.554092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.299 [2024-10-08 18:16:29.567588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.299 [2024-10-08 18:16:29.567617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.299 [2024-10-08 18:16:29.578635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.299 [2024-10-08 18:16:29.578664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.557 [2024-10-08 18:16:29.590844] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.557 [2024-10-08 18:16:29.590873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.557 [2024-10-08 18:16:29.603010] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.557 [2024-10-08 18:16:29.603038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.557 [2024-10-08 18:16:29.614616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.557 [2024-10-08 18:16:29.614644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.557 [2024-10-08 18:16:29.627520] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.558 [2024-10-08 18:16:29.627548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.558 [2024-10-08 18:16:29.637533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.558 [2024-10-08 18:16:29.637561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.558 [2024-10-08 18:16:29.649810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.558 [2024-10-08 18:16:29.649838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.558 10990.50 IOPS, 85.86 MiB/s [2024-10-08T16:16:29.845Z] [2024-10-08 18:16:29.661351] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.558 [2024-10-08 18:16:29.661380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.558 [2024-10-08 18:16:29.672793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.558 [2024-10-08 18:16:29.672822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.558 [2024-10-08 
18:16:29.684247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.558 [2024-10-08 18:16:29.684275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.558 [2024-10-08 18:16:29.697299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.558 [2024-10-08 18:16:29.697327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.558 [2024-10-08 18:16:29.707967] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.558 [2024-10-08 18:16:29.707995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.558 [2024-10-08 18:16:29.719036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.558 [2024-10-08 18:16:29.719065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.558 [2024-10-08 18:16:29.730496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.558 [2024-10-08 18:16:29.730525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.558 [2024-10-08 18:16:29.741854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.558 [2024-10-08 18:16:29.741882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.558 [2024-10-08 18:16:29.753361] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.558 [2024-10-08 18:16:29.753390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.558 [2024-10-08 18:16:29.764903] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.558 [2024-10-08 18:16:29.764930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.558 [2024-10-08 18:16:29.776757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.558 [2024-10-08 18:16:29.776799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.558 [2024-10-08 18:16:29.788735] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.558 [2024-10-08 18:16:29.788770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.558 [2024-10-08 18:16:29.800357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.558 [2024-10-08 18:16:29.800385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.558 [2024-10-08 18:16:29.811861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.558 [2024-10-08 18:16:29.811888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.558 [2024-10-08 18:16:29.823375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.558 [2024-10-08 18:16:29.823403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.558 [2024-10-08 18:16:29.834366] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.558 [2024-10-08 18:16:29.834395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.816 [2024-10-08 18:16:29.847897] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.816 [2024-10-08 18:16:29.847927] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.816 [2024-10-08 18:16:29.858611] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.816 [2024-10-08 18:16:29.858639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.816 [2024-10-08 18:16:29.869845] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.816 [2024-10-08 18:16:29.869887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.816 [2024-10-08 18:16:29.881411] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.816 [2024-10-08 18:16:29.881439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.816 [2024-10-08 18:16:29.892872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.816 [2024-10-08 18:16:29.892900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.816 [2024-10-08 18:16:29.906046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.816 [2024-10-08 18:16:29.906073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.816 [2024-10-08 18:16:29.916499] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.816 [2024-10-08 18:16:29.916528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.816 [2024-10-08 18:16:29.928078] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.816 [2024-10-08 18:16:29.928105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.816 [2024-10-08 18:16:29.939583] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.816 [2024-10-08 18:16:29.939611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.816 [2024-10-08 18:16:29.950823] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.816 [2024-10-08 18:16:29.950852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.816 [2024-10-08 18:16:29.962122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.816 [2024-10-08 18:16:29.962150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.816 [2024-10-08 18:16:29.973541] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.816 [2024-10-08 18:16:29.973581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.816 [2024-10-08 18:16:29.985206] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.816 [2024-10-08 18:16:29.985234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.816 [2024-10-08 18:16:29.996227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.816 [2024-10-08 18:16:29.996255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.816 [2024-10-08 18:16:30.007666] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.816 [2024-10-08 18:16:30.007704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.816 [2024-10-08 18:16:30.019383] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.816 [2024-10-08 18:16:30.019421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.816 [2024-10-08 18:16:30.031306] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.816 [2024-10-08 18:16:30.031335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.816 [2024-10-08 18:16:30.043339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.816 [2024-10-08 18:16:30.043369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.816 [2024-10-08 18:16:30.055238] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.816 [2024-10-08 18:16:30.055267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.816 [2024-10-08 18:16:30.067717] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.816 [2024-10-08 18:16:30.067746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.817 [2024-10-08 18:16:30.079990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.817 [2024-10-08 18:16:30.080019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:11.817 [2024-10-08 18:16:30.092251] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:11.817 [2024-10-08 18:16:30.092302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.075 [2024-10-08 18:16:30.103846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.075 [2024-10-08 18:16:30.103878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.075 [2024-10-08 18:16:30.115856] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.075 [2024-10-08 18:16:30.115885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.075 [2024-10-08 18:16:30.127783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.075 [2024-10-08 18:16:30.127811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.075 [2024-10-08 18:16:30.139360] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.075 [2024-10-08 18:16:30.139389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.075 [2024-10-08 18:16:30.151576] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.075 [2024-10-08 18:16:30.151605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.075 [2024-10-08 18:16:30.163373] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.075 [2024-10-08 18:16:30.163402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.075 [2024-10-08 18:16:30.175430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.075 [2024-10-08 18:16:30.175458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.075 [2024-10-08 18:16:30.187069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.075 [2024-10-08 18:16:30.187096] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.075 [2024-10-08 18:16:30.199053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.075 [2024-10-08 18:16:30.199081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.075 [2024-10-08 18:16:30.210678] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.075 [2024-10-08 18:16:30.210705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.075 [2024-10-08 18:16:30.222501] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.075 [2024-10-08 18:16:30.222530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.075 [2024-10-08 18:16:30.234366] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.075 [2024-10-08 18:16:30.234405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.075 [2024-10-08 18:16:30.245923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.075 [2024-10-08 18:16:30.245951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.075 [2024-10-08 18:16:30.257922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.075 [2024-10-08 18:16:30.257951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.075 [2024-10-08 18:16:30.269818] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.075 [2024-10-08 18:16:30.269847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.075 [2024-10-08 18:16:30.281917] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.075 [2024-10-08 18:16:30.281946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.075 [2024-10-08 18:16:30.293630] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.075 [2024-10-08 18:16:30.293677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.075 [2024-10-08 18:16:30.305853] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.075 [2024-10-08 18:16:30.305881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.075 [2024-10-08 18:16:30.317306] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.075 [2024-10-08 18:16:30.317335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.075 [2024-10-08 18:16:30.328671] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.075 [2024-10-08 18:16:30.328699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.075 [2024-10-08 18:16:30.341101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.075 [2024-10-08 18:16:30.341129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.076 [2024-10-08 18:16:30.353076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.076 [2024-10-08 18:16:30.353119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.334 [2024-10-08 18:16:30.364695] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.334 [2024-10-08 18:16:30.364723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.334 [2024-10-08 18:16:30.378081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.334 [2024-10-08 18:16:30.378109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.334 [2024-10-08 18:16:30.388985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.334 [2024-10-08 18:16:30.389014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.334 [2024-10-08 18:16:30.400610] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.334 [2024-10-08 18:16:30.400639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.334 [2024-10-08 18:16:30.412408] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.334 [2024-10-08 18:16:30.412437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.334 [2024-10-08 18:16:30.425697] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.334 [2024-10-08 18:16:30.425725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.334 [2024-10-08 18:16:30.435472] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.334 [2024-10-08 18:16:30.435501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.334 [2024-10-08 18:16:30.447683] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.334 [2024-10-08 18:16:30.447710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.334 [2024-10-08 18:16:30.458883] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.334 [2024-10-08 18:16:30.458910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.334 [2024-10-08 18:16:30.470556] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.334 [2024-10-08 18:16:30.470600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.334 [2024-10-08 18:16:30.482301] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.334 [2024-10-08 18:16:30.482329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.334 [2024-10-08 18:16:30.494037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.334 [2024-10-08 18:16:30.494065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.334 [2024-10-08 18:16:30.505710] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.334 [2024-10-08 18:16:30.505738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.334 [2024-10-08 18:16:30.517465] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.334 [2024-10-08 18:16:30.517493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.335 [2024-10-08 18:16:30.529075] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.335 [2024-10-08 18:16:30.529117] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.335 [2024-10-08 18:16:30.542577] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.335 [2024-10-08 18:16:30.542606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.335 [2024-10-08 18:16:30.553440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.335 [2024-10-08 18:16:30.553468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.335 [2024-10-08 18:16:30.564803] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.335 [2024-10-08 18:16:30.564831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.335 [2024-10-08 18:16:30.576112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.335 [2024-10-08 18:16:30.576140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.335 [2024-10-08 18:16:30.587363] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.335 [2024-10-08 18:16:30.587403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.335 [2024-10-08 18:16:30.599122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.335 [2024-10-08 18:16:30.599151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.335 [2024-10-08 18:16:30.610688] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.335 [2024-10-08 18:16:30.610717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.593 [2024-10-08 18:16:30.626210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.593 [2024-10-08 18:16:30.626241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.593 [2024-10-08 18:16:30.637245] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.593 [2024-10-08 18:16:30.637274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.593 [2024-10-08 18:16:30.649166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.593 [2024-10-08 18:16:30.649194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.593 [2024-10-08 18:16:30.661106] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.593 [2024-10-08 18:16:30.661135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.594 10980.00 IOPS, 85.78 MiB/s [2024-10-08T16:16:30.881Z] [2024-10-08 18:16:30.671239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.594 [2024-10-08 18:16:30.671266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.594 00:09:12.594 Latency(us) 00:09:12.594 [2024-10-08T16:16:30.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.594 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:12.594 Nvme1n1 : 5.01 10983.80 85.81 0.00 0.00 11637.93 4757.43 25437.68 00:09:12.594 [2024-10-08T16:16:30.881Z] =================================================================================================================== 00:09:12.594 
[2024-10-08T16:16:30.881Z] Total : 10983.80 85.81 0.00 0.00 11637.93 4757.43 25437.68 00:09:12.594 [2024-10-08 18:16:30.677408] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.594 [2024-10-08 18:16:30.677436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.594 [2024-10-08 18:16:30.685440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.594 [2024-10-08 18:16:30.685466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.594 [2024-10-08 18:16:30.693456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.594 [2024-10-08 18:16:30.693479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.594 [2024-10-08 18:16:30.701543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.594 [2024-10-08 18:16:30.701588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.594 [2024-10-08 18:16:30.709563] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.594 [2024-10-08 18:16:30.709609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.594 [2024-10-08 18:16:30.717587] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.594 [2024-10-08 18:16:30.717634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.594 [2024-10-08 18:16:30.725601] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.594 [2024-10-08 18:16:30.725645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.594 [2024-10-08 18:16:30.733616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.594 [2024-10-08 18:16:30.733661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.594 [2024-10-08 18:16:30.741673] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.594 [2024-10-08 18:16:30.741719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.594 [2024-10-08 18:16:30.749668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.594 [2024-10-08 18:16:30.749712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.594 [2024-10-08 18:16:30.757692] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.594 [2024-10-08 18:16:30.757736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.594 [2024-10-08 18:16:30.765714] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.594 [2024-10-08 18:16:30.765758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.594 [2024-10-08 18:16:30.773734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.594 [2024-10-08 18:16:30.773779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.594 [2024-10-08 18:16:30.781759] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.594 [2024-10-08 18:16:30.781801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.594 [2024-10-08 18:16:30.789780] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.594 [2024-10-08 18:16:30.789825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.594 [2024-10-08 18:16:30.797794] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.594 [2024-10-08 18:16:30.797853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.594 [2024-10-08 18:16:30.805817] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.594 [2024-10-08 18:16:30.805859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.594 [2024-10-08 18:16:30.813780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.594 [2024-10-08 18:16:30.813799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.594 [2024-10-08 18:16:30.821801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.594 [2024-10-08 18:16:30.821821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.594 [2024-10-08 18:16:30.829821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.594 [2024-10-08 18:16:30.829840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.594 [2024-10-08 18:16:30.837844] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.594 [2024-10-08 18:16:30.837863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.594 [2024-10-08 18:16:30.845917] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.594 [2024-10-08 18:16:30.845956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.594 [2024-10-08 18:16:30.853949] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.594 [2024-10-08 18:16:30.853993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.594 [2024-10-08 18:16:30.861948] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.594 [2024-10-08 18:16:30.861984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.594 [2024-10-08 18:16:30.869933] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.594 [2024-10-08 18:16:30.869953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.594 [2024-10-08 18:16:30.877956] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.594 [2024-10-08 18:16:30.877976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.852 [2024-10-08 18:16:30.885979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.852 [2024-10-08 18:16:30.886000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.852 [2024-10-08 18:16:30.894007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.852 [2024-10-08 18:16:30.894028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.852 [2024-10-08 18:16:30.902094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.852 [2024-10-08 18:16:30.902139] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.852 [2024-10-08 18:16:30.910104] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.852 [2024-10-08 18:16:30.910146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.852 [2024-10-08 18:16:30.918065] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.852 [2024-10-08 18:16:30.918085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.852 [2024-10-08 18:16:30.926086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.852 [2024-10-08 18:16:30.926106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.852 [2024-10-08 18:16:30.934107] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.852 [2024-10-08 18:16:30.934126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1170848) - No such process 00:09:12.852 18:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1170848 00:09:12.852 18:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:12.852 18:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.852 18:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:12.852 18:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.852 18:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:12.852 18:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.852 18:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:12.852 delay0 00:09:12.852 18:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.852 18:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:12.852 18:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.852 18:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:12.852 18:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.852 18:16:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:12.852 [2024-10-08 18:16:31.008169] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:20.963 Initializing NVMe Controllers 00:09:20.963 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:20.963 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:20.963 Initialization complete. Launching workers. 
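Note on the output above: the long run of "Requested NSID 1 already in use" / "Unable to add namespace" errors is expected here. While a perf job drives randrw I/O against Nvme1n1, a background job (the pid 1170848 reaped via the kill/wait at zcopy.sh lines 42/49) keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which the target rejects because that namespace is still attached. Once the background job is reaped, zcopy.sh rebuilds the namespace on top of a delay bdev and runs the abort example against it. Condensed from the trace above (rpc_cmd is the test helper that issues JSON-RPCs to the running target; paths and values are the ones from this run):

    # Replace the namespace with a delay bdev so aborts have slow, in-flight I/O to cancel.
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc_cmd bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # avg/p99 read and write latencies, in microseconds
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # 5-second abort workload against the TCP listener (core 0x1, queue depth 64, 50/50 randrw):
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The counters printed just below are self-consistent: 17341 aborts submitted plus 105 that could not be submitted equals the 17446 I/Os seen (258 completed + 17188 failed), and 17250 + 91 accounts for all submitted aborts.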
00:09:20.963 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 258, failed: 17188 00:09:20.963 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 17341, failed to submit 105 00:09:20.963 success 17250, unsuccessful 91, failed 0 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:20.963 rmmod nvme_tcp 00:09:20.963 rmmod nvme_fabrics 00:09:20.963 rmmod nvme_keyring 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 1169500 ']' 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 1169500 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1169500 ']' 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1169500 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1169500 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1169500' 00:09:20.963 killing process with pid 1169500 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1169500 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1169500 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:20.963 18:16:38 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:20.963 18:16:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.341 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:22.341 00:09:22.341 real 0m29.017s 00:09:22.341 user 0m42.368s 00:09:22.341 sys 0m8.971s 00:09:22.341 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:22.341 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:22.341 ************************************ 00:09:22.341 END TEST nvmf_zcopy 00:09:22.341 ************************************ 00:09:22.341 18:16:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:22.341 18:16:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:22.341 18:16:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:22.341 18:16:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:22.341 ************************************ 00:09:22.341 START TEST nvmf_nmic 00:09:22.341 ************************************ 00:09:22.341 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:22.341 * Looking for test storage... 
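Before following the nmic test, note what the nvmftestfini teardown traced at the end of the zcopy run amounts to. A condensed view of the commands shown above (pids and interface names are the ones from this log; the _remove_spdk_ns step is summarized because its trace is suppressed by xtrace_disable):

    sync
    # Unload initiator-side kernel modules; common.sh retries this in a {1..20} loop
    # because the modules can still be busy right after disconnect (nvme_keyring unloads as a dependency).
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                                       # SPDK target app started for the test (pid 1169500, reactor_1)
    # Strip only the SPDK_NVMF-tagged firewall rules, leaving everything else intact:
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    _remove_spdk_ns                                       # removes the *_ns_spdk test network namespaces
    ip -4 addr flush cvl_0_1                              # drop the test IP from the second NIC port

The whole zcopy test finishes in roughly 29 s of wall time (0m42.4s user, 0m9.0s sys), as reported just above.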
00:09:22.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.341 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:22.341 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:09:22.341 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:22.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.601 --rc genhtml_branch_coverage=1 00:09:22.601 --rc genhtml_function_coverage=1 00:09:22.601 --rc genhtml_legend=1 00:09:22.601 --rc geninfo_all_blocks=1 00:09:22.601 --rc geninfo_unexecuted_blocks=1 00:09:22.601 00:09:22.601 ' 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:22.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.601 --rc genhtml_branch_coverage=1 00:09:22.601 --rc genhtml_function_coverage=1 00:09:22.601 --rc genhtml_legend=1 00:09:22.601 --rc geninfo_all_blocks=1 00:09:22.601 --rc geninfo_unexecuted_blocks=1 00:09:22.601 00:09:22.601 ' 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:22.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.601 --rc genhtml_branch_coverage=1 00:09:22.601 --rc genhtml_function_coverage=1 00:09:22.601 --rc genhtml_legend=1 00:09:22.601 --rc geninfo_all_blocks=1 00:09:22.601 --rc geninfo_unexecuted_blocks=1 00:09:22.601 00:09:22.601 ' 00:09:22.601 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:22.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.601 --rc genhtml_branch_coverage=1 00:09:22.601 --rc genhtml_function_coverage=1 00:09:22.601 --rc genhtml_legend=1 00:09:22.602 --rc geninfo_all_blocks=1 00:09:22.602 --rc geninfo_unexecuted_blocks=1 00:09:22.602 00:09:22.602 ' 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
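The scripts/common.sh trace above ("lt 1.15 2" -> cmp_versions) is the test harness deciding which coverage flags fit the installed lcov: the two version strings are split on ".", "-" and ":" and compared field by field, and because 1 < 2 the pre-2.0 option set (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1) is selected. A minimal re-creation of that comparison (not the actual common.sh source, just the same logic):

    #!/usr/bin/env bash
    # Returns 0 (true) when $1 is strictly older than $2, comparing numeric fields left to right.
    version_lt() {
        local -a a b
        local i x y
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        for ((i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++)); do
            x=${a[i]:-0} y=${b[i]:-0}
            ((10#$x > 10#$y)) && return 1
            ((10#$x < 10#$y)) && return 0
        done
        return 1    # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov < 2: keep --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"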
00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:22.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:22.602 
18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:22.602 18:16:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:25.139 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:25.139 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:25.139 18:16:42 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:25.139 Found net devices under 0000:09:00.0: cvl_0_0 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:25.139 Found net devices under 0000:09:00.1: cvl_0_1 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:25.139 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:25.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:25.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:09:25.140 00:09:25.140 --- 10.0.0.2 ping statistics --- 00:09:25.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.140 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:25.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:25.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:09:25.140 00:09:25.140 --- 10.0.0.1 ping statistics --- 00:09:25.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.140 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=1174375 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 1174375 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1174375 ']' 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:25.140 18:16:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.140 [2024-10-08 18:16:43.038018] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
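The namespace plumbing traced above splits the two E810 ports so that target and initiator traffic actually cross the wire: cvl_0_0 is moved into a private namespace and addressed as 10.0.0.2, while cvl_0_1 stays in the root namespace as 10.0.0.1. A minimal by-hand sketch of that sequence, reusing the interface names, addresses and iptables comment tag from this run (the tag only exists so cleanup can strip the rule later):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic to reach the initiator-side port
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # reachability check in both directions before the target is started
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1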
00:09:25.140 [2024-10-08 18:16:43.038119] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.140 [2024-10-08 18:16:43.106154] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:25.140 [2024-10-08 18:16:43.218264] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:25.140 [2024-10-08 18:16:43.218348] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:25.140 [2024-10-08 18:16:43.218376] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:25.140 [2024-10-08 18:16:43.218394] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:25.140 [2024-10-08 18:16:43.218411] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:25.140 [2024-10-08 18:16:43.220264] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.140 [2024-10-08 18:16:43.220311] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:25.140 [2024-10-08 18:16:43.220339] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:25.140 [2024-10-08 18:16:43.220343] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.140 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:25.140 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:25.140 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:25.140 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:25.140 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.140 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:25.140 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:25.140 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.140 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.140 [2024-10-08 18:16:43.382420] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:25.140 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.140 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:25.140 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.140 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.140 Malloc0 00:09:25.140 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.140 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:25.140 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.140 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:25.140 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.140 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:25.140 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.140 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.398 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.398 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:25.398 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.398 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.398 [2024-10-08 18:16:43.434051] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:25.398 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.398 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:25.398 test case1: single bdev can't be used in multiple subsystems 00:09:25.398 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:25.398 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.398 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.398 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.398 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:25.398 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.398 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.398 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.398 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:25.398 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:25.398 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.398 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.398 [2024-10-08 18:16:43.457903] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:25.398 [2024-10-08 18:16:43.457933] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:25.398 [2024-10-08 18:16:43.457956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.398 request: 00:09:25.398 { 00:09:25.398 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:25.398 "namespace": { 00:09:25.398 "bdev_name": "Malloc0", 00:09:25.398 "no_auto_visible": false 
00:09:25.398 }, 00:09:25.398 "method": "nvmf_subsystem_add_ns", 00:09:25.398 "req_id": 1 00:09:25.398 } 00:09:25.398 Got JSON-RPC error response 00:09:25.398 response: 00:09:25.398 { 00:09:25.398 "code": -32602, 00:09:25.398 "message": "Invalid parameters" 00:09:25.398 } 00:09:25.398 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:25.398 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:25.398 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:25.398 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:25.398 Adding namespace failed - expected result. 00:09:25.398 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:25.398 test case2: host connect to nvmf target in multiple paths 00:09:25.398 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:25.398 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.398 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:25.398 [2024-10-08 18:16:43.466034] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:25.398 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.398 18:16:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:25.964 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:26.529 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:26.529 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:26.529 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:26.529 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:26.529 18:16:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:29.054 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:29.054 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:29.054 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:29.054 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:29.054 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:29.054 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:29.054 18:16:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:29.054 [global] 00:09:29.054 thread=1 00:09:29.054 invalidate=1 00:09:29.054 rw=write 00:09:29.054 time_based=1 00:09:29.054 runtime=1 00:09:29.054 ioengine=libaio 00:09:29.054 direct=1 00:09:29.054 bs=4096 00:09:29.054 iodepth=1 00:09:29.054 norandommap=0 00:09:29.054 numjobs=1 00:09:29.054 00:09:29.054 verify_dump=1 00:09:29.054 verify_backlog=512 00:09:29.054 verify_state_save=0 00:09:29.054 do_verify=1 00:09:29.054 verify=crc32c-intel 00:09:29.054 [job0] 00:09:29.054 filename=/dev/nvme0n1 00:09:29.054 Could not set queue depth (nvme0n1) 00:09:29.054 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:29.054 fio-3.35 00:09:29.054 Starting 1 thread 00:09:30.011 00:09:30.011 job0: (groupid=0, jobs=1): err= 0: pid=1174892: Tue Oct 8 18:16:48 2024 00:09:30.011 read: IOPS=23, BW=93.1KiB/s (95.3kB/s)(96.0KiB/1031msec) 00:09:30.011 slat (nsec): min=5538, max=32553, avg=23983.46, stdev=9700.87 00:09:30.011 clat (usec): min=306, max=42042, avg=39708.21, stdev=8408.41 00:09:30.011 lat (usec): min=327, max=42060, avg=39732.19, stdev=8409.20 00:09:30.011 clat percentiles (usec): 00:09:30.011 | 1.00th=[ 306], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:30.011 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:09:30.011 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:30.011 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:30.011 | 99.99th=[42206] 00:09:30.011 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:09:30.011 slat (nsec): min=5171, max=35114, avg=8328.64, stdev=4328.37 00:09:30.011 clat (usec): min=118, max=245, avg=139.41, stdev=13.15 00:09:30.011 lat (usec): min=123, max=280, avg=147.74, stdev=14.68 00:09:30.011 clat percentiles (usec): 00:09:30.011 | 1.00th=[ 121], 5.00th=[ 122], 10.00th=[ 124], 20.00th=[ 128], 00:09:30.011 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 143], 00:09:30.011 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 155], 95.00th=[ 161], 00:09:30.011 | 99.00th=[ 172], 99.50th=[ 188], 99.90th=[ 245], 99.95th=[ 245], 00:09:30.011 | 99.99th=[ 245] 00:09:30.011 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:30.011 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:30.011 lat (usec) : 250=95.52%, 500=0.19% 00:09:30.011 lat (msec) : 50=4.29% 00:09:30.011 cpu : usr=0.10%, sys=0.49%, ctx=536, majf=0, minf=1 00:09:30.011 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:30.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.011 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.011 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.011 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:30.011 00:09:30.011 Run status group 0 (all jobs): 00:09:30.011 READ: bw=93.1KiB/s (95.3kB/s), 93.1KiB/s-93.1KiB/s (95.3kB/s-95.3kB/s), io=96.0KiB (98.3kB), run=1031-1031msec 00:09:30.011 WRITE: bw=1986KiB/s (2034kB/s), 1986KiB/s-1986KiB/s (2034kB/s-2034kB/s), io=2048KiB (2097kB), run=1031-1031msec 00:09:30.011 00:09:30.011 Disk stats (read/write): 00:09:30.011 nvme0n1: ios=70/512, merge=0/0, ticks=807/68, in_queue=875, util=91.28% 00:09:30.011 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:30.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:30.301 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:30.301 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:30.301 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:30.301 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.301 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:30.301 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.301 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:30.301 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:30.301 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:30.301 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:30.301 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:30.301 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:30.301 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:30.301 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:30.301 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:30.301 rmmod nvme_tcp 00:09:30.301 rmmod nvme_fabrics 00:09:30.301 rmmod nvme_keyring 00:09:30.301 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:30.301 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:30.301 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:30.301 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 1174375 ']' 00:09:30.301 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 1174375 00:09:30.301 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1174375 ']' 00:09:30.301 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1174375 00:09:30.301 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:30.301 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:30.301 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1174375 00:09:30.301 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:30.301 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:30.301 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1174375' 00:09:30.301 killing process with pid 1174375 00:09:30.301 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1174375 00:09:30.301 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@974 -- # wait 1174375 00:09:30.559 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:30.559 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:30.559 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:30.559 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:30.559 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:09:30.559 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:30.559 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:09:30.559 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:30.559 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:30.559 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.559 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.559 18:16:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.465 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:32.725 00:09:32.725 real 0m10.206s 00:09:32.725 user 0m22.840s 00:09:32.725 sys 0m2.414s 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:32.725 ************************************ 00:09:32.725 END TEST nvmf_nmic 00:09:32.725 ************************************ 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:32.725 ************************************ 00:09:32.725 START TEST nvmf_fio_target 00:09:32.725 ************************************ 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:32.725 * Looking for test storage... 
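With nvmf_nmic finished, the body of that test reduces to a short RPC and nvme-cli sequence: one malloc-backed subsystem, a deliberately failing attempt to reuse its bdev in a second subsystem, and a dual-portal connect. A rough stand-alone equivalent, reusing the NQNs, serials and addresses from the trace above (the rpc shell variable and the trailing echo are illustrative shorthand; the trace issues the same calls through its rpc_cmd helper):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # test case1: the same bdev cannot back a namespace in a second subsystem,
  # so this add_ns is expected to fail with "Invalid parameters"
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'Adding namespace failed - expected result.'

  # test case2: one subsystem on two portals, connected over both paths
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
    --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
    --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421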
00:09:32.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:32.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.725 --rc genhtml_branch_coverage=1 00:09:32.725 --rc genhtml_function_coverage=1 00:09:32.725 --rc genhtml_legend=1 00:09:32.725 --rc geninfo_all_blocks=1 00:09:32.725 --rc geninfo_unexecuted_blocks=1 00:09:32.725 00:09:32.725 ' 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:32.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.725 --rc genhtml_branch_coverage=1 00:09:32.725 --rc genhtml_function_coverage=1 00:09:32.725 --rc genhtml_legend=1 00:09:32.725 --rc geninfo_all_blocks=1 00:09:32.725 --rc geninfo_unexecuted_blocks=1 00:09:32.725 00:09:32.725 ' 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:32.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.725 --rc genhtml_branch_coverage=1 00:09:32.725 --rc genhtml_function_coverage=1 00:09:32.725 --rc genhtml_legend=1 00:09:32.725 --rc geninfo_all_blocks=1 00:09:32.725 --rc geninfo_unexecuted_blocks=1 00:09:32.725 00:09:32.725 ' 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:32.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.725 --rc genhtml_branch_coverage=1 00:09:32.725 --rc genhtml_function_coverage=1 00:09:32.725 --rc genhtml_legend=1 00:09:32.725 --rc geninfo_all_blocks=1 00:09:32.725 --rc geninfo_unexecuted_blocks=1 00:09:32.725 00:09:32.725 ' 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:32.725 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:32.726 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:32.726 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:32.726 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:32.726 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:32.726 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:32.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:32.726 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:32.726 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:32.726 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:32.726 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:32.726 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:32.726 18:16:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:32.726 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:32.726 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:32.726 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:32.726 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:32.726 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:32.726 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:32.726 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.726 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:32.726 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.726 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:32.726 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:32.726 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:32.726 18:16:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:35.258 18:16:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:35.258 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:35.258 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:35.259 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:35.259 18:16:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:35.259 Found net devices under 0000:09:00.0: cvl_0_0 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:35.259 Found net devices under 0000:09:00.1: cvl_0_1 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:35.259 18:16:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:35.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:35.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:09:35.259 00:09:35.259 --- 10.0.0.2 ping statistics --- 00:09:35.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.259 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:35.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:35.259 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:09:35.259 00:09:35.259 --- 10.0.0.1 ping statistics --- 00:09:35.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.259 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=1177106 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 1177106 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1177106 ']' 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:35.259 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.259 [2024-10-08 18:16:53.461181] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
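Condensed for reference (not part of the recorded console output), the network bring-up and target launch traced above amount to the following shell sequence; the cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addresses, and the nvmf_tgt flags are the ones reported by this run, and long workspace paths are shortened:

    # move the target-side E810 port into its own namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic on port 4420 and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # start the SPDK target inside the namespace
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF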
00:09:35.259 [2024-10-08 18:16:53.461258] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.259 [2024-10-08 18:16:53.523172] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:35.518 [2024-10-08 18:16:53.631037] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:35.518 [2024-10-08 18:16:53.631097] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:35.518 [2024-10-08 18:16:53.631124] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:35.518 [2024-10-08 18:16:53.631135] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:35.518 [2024-10-08 18:16:53.631144] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:35.518 [2024-10-08 18:16:53.632779] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.518 [2024-10-08 18:16:53.632843] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:35.518 [2024-10-08 18:16:53.632952] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:35.518 [2024-10-08 18:16:53.632956] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.518 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:35.518 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:09:35.518 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:35.518 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:35.518 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.518 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.518 18:16:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:35.776 [2024-10-08 18:16:54.055168] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:36.034 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:36.291 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:36.291 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:36.549 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:36.549 18:16:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:36.806 18:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:36.806 18:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:37.064 18:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:37.064 18:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:37.322 18:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:37.579 18:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:37.579 18:16:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:38.145 18:16:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:38.145 18:16:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:38.145 18:16:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:38.403 18:16:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:38.660 18:16:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:38.660 18:16:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:38.660 18:16:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:39.225 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:39.225 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:39.225 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:39.485 [2024-10-08 18:16:57.738092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:39.485 18:16:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:39.742 18:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:39.998 18:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:40.932 18:16:58 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:40.932 18:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:40.932 18:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:40.932 18:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:40.932 18:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:40.932 18:16:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:42.831 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:42.831 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:42.831 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:42.831 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:42.831 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:42.831 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:42.831 18:17:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:42.831 [global] 00:09:42.831 thread=1 00:09:42.831 invalidate=1 00:09:42.831 rw=write 00:09:42.831 time_based=1 00:09:42.831 runtime=1 00:09:42.831 ioengine=libaio 00:09:42.831 direct=1 00:09:42.831 bs=4096 00:09:42.831 iodepth=1 00:09:42.831 norandommap=0 00:09:42.831 numjobs=1 00:09:42.831 00:09:42.831 verify_dump=1 00:09:42.831 verify_backlog=512 00:09:42.831 verify_state_save=0 00:09:42.831 do_verify=1 00:09:42.831 verify=crc32c-intel 00:09:42.831 [job0] 00:09:42.831 filename=/dev/nvme0n1 00:09:42.831 [job1] 00:09:42.831 filename=/dev/nvme0n2 00:09:42.831 [job2] 00:09:42.831 filename=/dev/nvme0n3 00:09:42.831 [job3] 00:09:42.831 filename=/dev/nvme0n4 00:09:42.831 Could not set queue depth (nvme0n1) 00:09:42.831 Could not set queue depth (nvme0n2) 00:09:42.831 Could not set queue depth (nvme0n3) 00:09:42.831 Could not set queue depth (nvme0n4) 00:09:43.088 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.088 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.088 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.088 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:43.088 fio-3.35 00:09:43.088 Starting 4 threads 00:09:44.460 00:09:44.460 job0: (groupid=0, jobs=1): err= 0: pid=1178178: Tue Oct 8 18:17:02 2024 00:09:44.460 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:44.460 slat (nsec): min=5486, max=63067, avg=12617.99, stdev=6381.91 00:09:44.460 clat (usec): min=166, max=3744, avg=243.94, stdev=117.84 00:09:44.460 lat (usec): min=173, max=3760, avg=256.56, stdev=119.24 00:09:44.460 clat percentiles (usec): 00:09:44.460 | 1.00th=[ 178], 5.00th=[ 188], 10.00th=[ 196], 20.00th=[ 208], 
00:09:44.460 | 30.00th=[ 219], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 239], 00:09:44.460 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 265], 95.00th=[ 318], 00:09:44.460 | 99.00th=[ 545], 99.50th=[ 562], 99.90th=[ 635], 99.95th=[ 3261], 00:09:44.460 | 99.99th=[ 3752] 00:09:44.460 write: IOPS=2240, BW=8963KiB/s (9178kB/s)(8972KiB/1001msec); 0 zone resets 00:09:44.460 slat (nsec): min=7232, max=80134, avg=16364.68, stdev=7508.15 00:09:44.460 clat (usec): min=126, max=1072, avg=186.94, stdev=54.14 00:09:44.460 lat (usec): min=135, max=1093, avg=203.30, stdev=56.06 00:09:44.460 clat percentiles (usec): 00:09:44.460 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 147], 20.00th=[ 155], 00:09:44.460 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 180], 00:09:44.460 | 70.00th=[ 190], 80.00th=[ 206], 90.00th=[ 237], 95.00th=[ 265], 00:09:44.460 | 99.00th=[ 424], 99.50th=[ 449], 99.90th=[ 742], 99.95th=[ 766], 00:09:44.460 | 99.99th=[ 1074] 00:09:44.460 bw ( KiB/s): min= 8192, max= 8192, per=44.26%, avg=8192.00, stdev= 0.00, samples=1 00:09:44.460 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:44.460 lat (usec) : 250=85.55%, 500=13.40%, 750=0.96%, 1000=0.02% 00:09:44.460 lat (msec) : 2=0.02%, 4=0.05% 00:09:44.460 cpu : usr=5.50%, sys=7.70%, ctx=4291, majf=0, minf=2 00:09:44.460 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.460 issued rwts: total=2048,2243,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.460 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.460 job1: (groupid=0, jobs=1): err= 0: pid=1178179: Tue Oct 8 18:17:02 2024 00:09:44.460 read: IOPS=1455, BW=5822KiB/s (5962kB/s)(5904KiB/1014msec) 00:09:44.460 slat (nsec): min=5867, max=53610, avg=13789.93, stdev=5999.17 00:09:44.460 clat (usec): min=182, max=41163, avg=426.66, stdev=2802.31 00:09:44.460 lat (usec): min=196, max=41180, avg=440.45, stdev=2802.55 00:09:44.460 clat percentiles (usec): 00:09:44.460 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 215], 00:09:44.460 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 239], 00:09:44.460 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 255], 95.00th=[ 265], 00:09:44.460 | 99.00th=[ 338], 99.50th=[ 367], 99.90th=[41157], 99.95th=[41157], 00:09:44.460 | 99.99th=[41157] 00:09:44.460 write: IOPS=1514, BW=6059KiB/s (6205kB/s)(6144KiB/1014msec); 0 zone resets 00:09:44.460 slat (usec): min=7, max=42492, avg=43.79, stdev=1083.83 00:09:44.460 clat (usec): min=130, max=2395, avg=184.41, stdev=73.95 00:09:44.460 lat (usec): min=138, max=42726, avg=228.21, stdev=1087.65 00:09:44.460 clat percentiles (usec): 00:09:44.460 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 157], 00:09:44.460 | 30.00th=[ 163], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 182], 00:09:44.460 | 70.00th=[ 188], 80.00th=[ 198], 90.00th=[ 221], 95.00th=[ 241], 00:09:44.460 | 99.00th=[ 318], 99.50th=[ 611], 99.90th=[ 922], 99.95th=[ 2409], 00:09:44.460 | 99.99th=[ 2409] 00:09:44.460 bw ( KiB/s): min= 4096, max= 8192, per=33.20%, avg=6144.00, stdev=2896.31, samples=2 00:09:44.460 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:09:44.460 lat (usec) : 250=90.31%, 500=9.16%, 750=0.23%, 1000=0.03% 00:09:44.460 lat (msec) : 4=0.03%, 50=0.23% 00:09:44.460 cpu : usr=3.65%, sys=5.63%, ctx=3014, majf=0, minf=1 00:09:44.460 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.460 issued rwts: total=1476,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.460 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.460 job2: (groupid=0, jobs=1): err= 0: pid=1178181: Tue Oct 8 18:17:02 2024 00:09:44.460 read: IOPS=21, BW=84.9KiB/s (87.0kB/s)(88.0KiB/1036msec) 00:09:44.460 slat (nsec): min=7555, max=36205, avg=19942.73, stdev=8546.78 00:09:44.460 clat (usec): min=40885, max=41051, avg=40972.64, stdev=45.34 00:09:44.460 lat (usec): min=40902, max=41068, avg=40992.59, stdev=47.24 00:09:44.460 clat percentiles (usec): 00:09:44.460 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:44.460 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:44.460 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:44.460 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:44.460 | 99.99th=[41157] 00:09:44.460 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:09:44.460 slat (nsec): min=6395, max=63542, avg=15702.21, stdev=8193.64 00:09:44.460 clat (usec): min=143, max=915, avg=242.62, stdev=64.78 00:09:44.460 lat (usec): min=166, max=932, avg=258.32, stdev=63.91 00:09:44.460 clat percentiles (usec): 00:09:44.460 | 1.00th=[ 161], 5.00th=[ 178], 10.00th=[ 196], 20.00th=[ 210], 00:09:44.460 | 30.00th=[ 217], 40.00th=[ 225], 50.00th=[ 233], 60.00th=[ 239], 00:09:44.460 | 70.00th=[ 245], 80.00th=[ 255], 90.00th=[ 297], 95.00th=[ 371], 00:09:44.460 | 99.00th=[ 457], 99.50th=[ 578], 99.90th=[ 914], 99.95th=[ 914], 00:09:44.460 | 99.99th=[ 914] 00:09:44.460 bw ( KiB/s): min= 4096, max= 4096, per=22.13%, avg=4096.00, stdev= 0.00, samples=1 00:09:44.460 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:44.460 lat (usec) : 250=72.47%, 500=22.47%, 750=0.75%, 1000=0.19% 00:09:44.460 lat (msec) : 50=4.12% 00:09:44.460 cpu : usr=0.19%, sys=0.87%, ctx=537, majf=0, minf=1 00:09:44.460 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.460 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.460 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.460 job3: (groupid=0, jobs=1): err= 0: pid=1178182: Tue Oct 8 18:17:02 2024 00:09:44.460 read: IOPS=116, BW=466KiB/s (477kB/s)(484KiB/1038msec) 00:09:44.460 slat (nsec): min=7034, max=35875, avg=10972.12, stdev=5997.93 00:09:44.460 clat (usec): min=191, max=41213, avg=7664.06, stdev=15776.14 00:09:44.460 lat (usec): min=206, max=41230, avg=7675.03, stdev=15780.22 00:09:44.460 clat percentiles (usec): 00:09:44.460 | 1.00th=[ 200], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 215], 00:09:44.460 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 241], 00:09:44.460 | 70.00th=[ 277], 80.00th=[ 832], 90.00th=[41157], 95.00th=[41157], 00:09:44.460 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:44.460 | 99.99th=[41157] 00:09:44.460 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:09:44.460 slat (nsec): min=6222, max=54766, avg=13192.58, stdev=6365.99 00:09:44.460 clat (usec): min=144, max=1013, avg=196.54, 
stdev=62.45 00:09:44.460 lat (usec): min=153, max=1020, avg=209.73, stdev=62.52 00:09:44.460 clat percentiles (usec): 00:09:44.460 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 167], 00:09:44.460 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 196], 00:09:44.460 | 70.00th=[ 202], 80.00th=[ 212], 90.00th=[ 229], 95.00th=[ 260], 00:09:44.460 | 99.00th=[ 302], 99.50th=[ 766], 99.90th=[ 1012], 99.95th=[ 1012], 00:09:44.460 | 99.99th=[ 1012] 00:09:44.460 bw ( KiB/s): min= 4096, max= 4096, per=22.13%, avg=4096.00, stdev= 0.00, samples=1 00:09:44.460 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:44.460 lat (usec) : 250=88.15%, 500=7.27%, 750=0.16%, 1000=0.63% 00:09:44.460 lat (msec) : 2=0.32%, 50=3.48% 00:09:44.460 cpu : usr=0.39%, sys=0.77%, ctx=635, majf=0, minf=1 00:09:44.460 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:44.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.460 issued rwts: total=121,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.460 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:44.460 00:09:44.460 Run status group 0 (all jobs): 00:09:44.460 READ: bw=13.8MiB/s (14.5MB/s), 84.9KiB/s-8184KiB/s (87.0kB/s-8380kB/s), io=14.3MiB (15.0MB), run=1001-1038msec 00:09:44.460 WRITE: bw=18.1MiB/s (19.0MB/s), 1973KiB/s-8963KiB/s (2020kB/s-9178kB/s), io=18.8MiB (19.7MB), run=1001-1038msec 00:09:44.460 00:09:44.460 Disk stats (read/write): 00:09:44.460 nvme0n1: ios=1639/2048, merge=0/0, ticks=375/351, in_queue=726, util=86.67% 00:09:44.460 nvme0n2: ios=1525/1536, merge=0/0, ticks=785/250, in_queue=1035, util=91.36% 00:09:44.460 nvme0n3: ios=66/512, merge=0/0, ticks=974/118, in_queue=1092, util=95.50% 00:09:44.460 nvme0n4: ios=173/512, merge=0/0, ticks=852/94, in_queue=946, util=94.42% 00:09:44.460 18:17:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:44.460 [global] 00:09:44.460 thread=1 00:09:44.460 invalidate=1 00:09:44.460 rw=randwrite 00:09:44.460 time_based=1 00:09:44.460 runtime=1 00:09:44.460 ioengine=libaio 00:09:44.460 direct=1 00:09:44.460 bs=4096 00:09:44.460 iodepth=1 00:09:44.460 norandommap=0 00:09:44.461 numjobs=1 00:09:44.461 00:09:44.461 verify_dump=1 00:09:44.461 verify_backlog=512 00:09:44.461 verify_state_save=0 00:09:44.461 do_verify=1 00:09:44.461 verify=crc32c-intel 00:09:44.461 [job0] 00:09:44.461 filename=/dev/nvme0n1 00:09:44.461 [job1] 00:09:44.461 filename=/dev/nvme0n2 00:09:44.461 [job2] 00:09:44.461 filename=/dev/nvme0n3 00:09:44.461 [job3] 00:09:44.461 filename=/dev/nvme0n4 00:09:44.461 Could not set queue depth (nvme0n1) 00:09:44.461 Could not set queue depth (nvme0n2) 00:09:44.461 Could not set queue depth (nvme0n3) 00:09:44.461 Could not set queue depth (nvme0n4) 00:09:44.461 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.461 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.461 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.461 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.461 fio-3.35 00:09:44.461 Starting 4 threads 00:09:45.832 
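Condensed for reference (not part of the recorded console output), the configuration that target/fio.sh applied before these fio runs, plus the host-side connect, amounts to the RPC sequence below; rpc.py paths are shortened and the host NQN/UUID is the one printed in the trace:

    # transport plus backing bdevs: two plain malloc disks, a RAID0, and a concat
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512                   # repeated for Malloc0..Malloc6
    scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    # one subsystem with four namespaces, listening on the namespaced address
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # host side: connect and wait for the four namespaces to show up as nvme0n1..nvme0n4
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
                 --hostid=29f67375-a902-e411-ace9-001e67bc3c9a \
                 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420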
00:09:45.832 job0: (groupid=0, jobs=1): err= 0: pid=1178415: Tue Oct 8 18:17:03 2024 00:09:45.832 read: IOPS=1436, BW=5746KiB/s (5884kB/s)(5752KiB/1001msec) 00:09:45.832 slat (nsec): min=6559, max=58160, avg=15252.24, stdev=7902.67 00:09:45.832 clat (usec): min=203, max=41058, avg=448.98, stdev=1518.62 00:09:45.832 lat (usec): min=212, max=41073, avg=464.23, stdev=1518.76 00:09:45.832 clat percentiles (usec): 00:09:45.832 | 1.00th=[ 225], 5.00th=[ 269], 10.00th=[ 281], 20.00th=[ 306], 00:09:45.832 | 30.00th=[ 326], 40.00th=[ 347], 50.00th=[ 363], 60.00th=[ 383], 00:09:45.832 | 70.00th=[ 441], 80.00th=[ 506], 90.00th=[ 537], 95.00th=[ 578], 00:09:45.832 | 99.00th=[ 660], 99.50th=[ 693], 99.90th=[41157], 99.95th=[41157], 00:09:45.832 | 99.99th=[41157] 00:09:45.832 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:45.832 slat (nsec): min=6255, max=70401, avg=14720.09, stdev=7009.96 00:09:45.832 clat (usec): min=130, max=1071, avg=193.23, stdev=45.03 00:09:45.832 lat (usec): min=138, max=1141, avg=207.95, stdev=47.56 00:09:45.832 clat percentiles (usec): 00:09:45.832 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 163], 00:09:45.832 | 30.00th=[ 172], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 192], 00:09:45.832 | 70.00th=[ 200], 80.00th=[ 215], 90.00th=[ 247], 95.00th=[ 273], 00:09:45.832 | 99.00th=[ 343], 99.50th=[ 367], 99.90th=[ 392], 99.95th=[ 1074], 00:09:45.832 | 99.99th=[ 1074] 00:09:45.832 bw ( KiB/s): min= 7840, max= 7840, per=44.07%, avg=7840.00, stdev= 0.00, samples=1 00:09:45.832 iops : min= 1960, max= 1960, avg=1960.00, stdev= 0.00, samples=1 00:09:45.832 lat (usec) : 250=48.25%, 500=41.49%, 750=10.05%, 1000=0.10% 00:09:45.833 lat (msec) : 2=0.03%, 50=0.07% 00:09:45.833 cpu : usr=3.30%, sys=5.80%, ctx=2974, majf=0, minf=2 00:09:45.833 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:45.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.833 issued rwts: total=1438,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.833 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:45.833 job1: (groupid=0, jobs=1): err= 0: pid=1178416: Tue Oct 8 18:17:03 2024 00:09:45.833 read: IOPS=1670, BW=6681KiB/s (6842kB/s)(6688KiB/1001msec) 00:09:45.833 slat (nsec): min=5736, max=64008, avg=17266.97, stdev=10314.83 00:09:45.833 clat (usec): min=195, max=4893, avg=328.28, stdev=136.69 00:09:45.833 lat (usec): min=202, max=4905, avg=345.54, stdev=140.06 00:09:45.833 clat percentiles (usec): 00:09:45.833 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 219], 20.00th=[ 251], 00:09:45.833 | 30.00th=[ 289], 40.00th=[ 306], 50.00th=[ 318], 60.00th=[ 343], 00:09:45.833 | 70.00th=[ 359], 80.00th=[ 379], 90.00th=[ 429], 95.00th=[ 478], 00:09:45.833 | 99.00th=[ 562], 99.50th=[ 578], 99.90th=[ 668], 99.95th=[ 4883], 00:09:45.833 | 99.99th=[ 4883] 00:09:45.833 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:45.833 slat (nsec): min=7599, max=48858, avg=14834.21, stdev=5982.45 00:09:45.833 clat (usec): min=135, max=365, avg=183.64, stdev=33.22 00:09:45.833 lat (usec): min=144, max=398, avg=198.47, stdev=34.31 00:09:45.833 clat percentiles (usec): 00:09:45.833 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:09:45.833 | 30.00th=[ 161], 40.00th=[ 172], 50.00th=[ 182], 60.00th=[ 188], 00:09:45.833 | 70.00th=[ 194], 80.00th=[ 202], 90.00th=[ 221], 95.00th=[ 245], 00:09:45.833 | 
99.00th=[ 310], 99.50th=[ 334], 99.90th=[ 363], 99.95th=[ 367], 00:09:45.833 | 99.99th=[ 367] 00:09:45.833 bw ( KiB/s): min= 8192, max= 8192, per=46.04%, avg=8192.00, stdev= 0.00, samples=1 00:09:45.833 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:45.833 lat (usec) : 250=61.56%, 500=36.96%, 750=1.45% 00:09:45.833 lat (msec) : 10=0.03% 00:09:45.833 cpu : usr=4.10%, sys=5.10%, ctx=3721, majf=0, minf=1 00:09:45.833 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:45.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.833 issued rwts: total=1672,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.833 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:45.833 job2: (groupid=0, jobs=1): err= 0: pid=1178417: Tue Oct 8 18:17:03 2024 00:09:45.833 read: IOPS=21, BW=86.0KiB/s (88.1kB/s)(88.0KiB/1023msec) 00:09:45.833 slat (nsec): min=8661, max=35674, avg=21620.14, stdev=8895.95 00:09:45.833 clat (usec): min=40784, max=41029, avg=40961.46, stdev=49.85 00:09:45.833 lat (usec): min=40793, max=41047, avg=40983.08, stdev=49.99 00:09:45.833 clat percentiles (usec): 00:09:45.833 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:45.833 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:45.833 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:45.833 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:45.833 | 99.99th=[41157] 00:09:45.833 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:09:45.833 slat (nsec): min=7710, max=49909, avg=15152.82, stdev=6814.13 00:09:45.833 clat (usec): min=156, max=406, avg=216.50, stdev=43.97 00:09:45.833 lat (usec): min=166, max=432, avg=231.65, stdev=44.94 00:09:45.833 clat percentiles (usec): 00:09:45.833 | 1.00th=[ 165], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 186], 00:09:45.833 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 215], 00:09:45.833 | 70.00th=[ 225], 80.00th=[ 245], 90.00th=[ 273], 95.00th=[ 314], 00:09:45.833 | 99.00th=[ 379], 99.50th=[ 392], 99.90th=[ 408], 99.95th=[ 408], 00:09:45.833 | 99.99th=[ 408] 00:09:45.833 bw ( KiB/s): min= 4096, max= 4096, per=23.02%, avg=4096.00, stdev= 0.00, samples=1 00:09:45.833 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:45.833 lat (usec) : 250=79.21%, 500=16.67% 00:09:45.833 lat (msec) : 50=4.12% 00:09:45.833 cpu : usr=0.78%, sys=0.49%, ctx=536, majf=0, minf=1 00:09:45.833 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:45.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.833 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.833 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:45.833 job3: (groupid=0, jobs=1): err= 0: pid=1178418: Tue Oct 8 18:17:03 2024 00:09:45.833 read: IOPS=175, BW=703KiB/s (720kB/s)(728KiB/1036msec) 00:09:45.833 slat (nsec): min=8744, max=56502, avg=21462.48, stdev=6914.64 00:09:45.833 clat (usec): min=230, max=41352, avg=4995.39, stdev=13000.93 00:09:45.833 lat (usec): min=248, max=41373, avg=5016.85, stdev=13000.32 00:09:45.833 clat percentiles (usec): 00:09:45.833 | 1.00th=[ 235], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 253], 00:09:45.833 | 30.00th=[ 262], 40.00th=[ 269], 
50.00th=[ 277], 60.00th=[ 306], 00:09:45.833 | 70.00th=[ 375], 80.00th=[ 474], 90.00th=[40633], 95.00th=[41157], 00:09:45.833 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:45.833 | 99.99th=[41157] 00:09:45.833 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:09:45.833 slat (nsec): min=9336, max=56286, avg=18646.63, stdev=8210.78 00:09:45.833 clat (usec): min=150, max=369, avg=213.22, stdev=38.46 00:09:45.833 lat (usec): min=161, max=414, avg=231.87, stdev=40.68 00:09:45.833 clat percentiles (usec): 00:09:45.833 | 1.00th=[ 161], 5.00th=[ 169], 10.00th=[ 178], 20.00th=[ 190], 00:09:45.833 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 210], 00:09:45.833 | 70.00th=[ 219], 80.00th=[ 229], 90.00th=[ 258], 95.00th=[ 306], 00:09:45.833 | 99.00th=[ 351], 99.50th=[ 355], 99.90th=[ 371], 99.95th=[ 371], 00:09:45.833 | 99.99th=[ 371] 00:09:45.833 bw ( KiB/s): min= 4096, max= 4096, per=23.02%, avg=4096.00, stdev= 0.00, samples=1 00:09:45.833 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:45.833 lat (usec) : 250=68.59%, 500=27.67%, 750=0.72% 00:09:45.833 lat (msec) : 50=3.03% 00:09:45.833 cpu : usr=0.58%, sys=1.93%, ctx=695, majf=0, minf=1 00:09:45.833 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:45.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.833 issued rwts: total=182,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.833 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:45.833 00:09:45.833 Run status group 0 (all jobs): 00:09:45.833 READ: bw=12.5MiB/s (13.1MB/s), 86.0KiB/s-6681KiB/s (88.1kB/s-6842kB/s), io=12.9MiB (13.6MB), run=1001-1036msec 00:09:45.833 WRITE: bw=17.4MiB/s (18.2MB/s), 1977KiB/s-8184KiB/s (2024kB/s-8380kB/s), io=18.0MiB (18.9MB), run=1001-1036msec 00:09:45.833 00:09:45.833 Disk stats (read/write): 00:09:45.833 nvme0n1: ios=1148/1536, merge=0/0, ticks=679/272, in_queue=951, util=91.18% 00:09:45.833 nvme0n2: ios=1539/1536, merge=0/0, ticks=1463/271, in_queue=1734, util=96.96% 00:09:45.833 nvme0n3: ios=42/512, merge=0/0, ticks=1682/104, in_queue=1786, util=98.34% 00:09:45.833 nvme0n4: ios=220/512, merge=0/0, ticks=1479/97, in_queue=1576, util=97.17% 00:09:45.833 18:17:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:45.833 [global] 00:09:45.833 thread=1 00:09:45.833 invalidate=1 00:09:45.833 rw=write 00:09:45.833 time_based=1 00:09:45.833 runtime=1 00:09:45.833 ioengine=libaio 00:09:45.833 direct=1 00:09:45.833 bs=4096 00:09:45.833 iodepth=128 00:09:45.833 norandommap=0 00:09:45.833 numjobs=1 00:09:45.833 00:09:45.833 verify_dump=1 00:09:45.833 verify_backlog=512 00:09:45.833 verify_state_save=0 00:09:45.833 do_verify=1 00:09:45.833 verify=crc32c-intel 00:09:45.833 [job0] 00:09:45.833 filename=/dev/nvme0n1 00:09:45.833 [job1] 00:09:45.833 filename=/dev/nvme0n2 00:09:45.833 [job2] 00:09:45.833 filename=/dev/nvme0n3 00:09:45.833 [job3] 00:09:45.833 filename=/dev/nvme0n4 00:09:45.833 Could not set queue depth (nvme0n1) 00:09:45.833 Could not set queue depth (nvme0n2) 00:09:45.833 Could not set queue depth (nvme0n3) 00:09:45.833 Could not set queue depth (nvme0n4) 00:09:45.833 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:45.833 
job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:45.833 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:45.833 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:45.833 fio-3.35 00:09:45.833 Starting 4 threads 00:09:47.206 00:09:47.206 job0: (groupid=0, jobs=1): err= 0: pid=1178750: Tue Oct 8 18:17:05 2024 00:09:47.206 read: IOPS=2894, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1006msec) 00:09:47.206 slat (usec): min=3, max=12912, avg=140.58, stdev=852.45 00:09:47.206 clat (usec): min=4410, max=43174, avg=16174.86, stdev=7098.41 00:09:47.206 lat (usec): min=6231, max=43192, avg=16315.43, stdev=7155.82 00:09:47.206 clat percentiles (usec): 00:09:47.206 | 1.00th=[ 7242], 5.00th=[11076], 10.00th=[11600], 20.00th=[11994], 00:09:47.206 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13304], 60.00th=[14484], 00:09:47.206 | 70.00th=[15401], 80.00th=[17695], 90.00th=[27657], 95.00th=[33817], 00:09:47.206 | 99.00th=[41681], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:09:47.206 | 99.99th=[43254] 00:09:47.206 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:09:47.206 slat (usec): min=3, max=28706, avg=181.83, stdev=1022.61 00:09:47.206 clat (usec): min=3563, max=84774, avg=23285.79, stdev=9400.43 00:09:47.206 lat (usec): min=3572, max=84800, avg=23467.62, stdev=9509.58 00:09:47.206 clat percentiles (usec): 00:09:47.206 | 1.00th=[ 5932], 5.00th=[ 8717], 10.00th=[10552], 20.00th=[12649], 00:09:47.206 | 30.00th=[19268], 40.00th=[24249], 50.00th=[24511], 60.00th=[25035], 00:09:47.206 | 70.00th=[25560], 80.00th=[30540], 90.00th=[35914], 95.00th=[37487], 00:09:47.206 | 99.00th=[38011], 99.50th=[43254], 99.90th=[84411], 99.95th=[84411], 00:09:47.206 | 99.99th=[84411] 00:09:47.206 bw ( KiB/s): min=12288, max=12288, per=19.17%, avg=12288.00, stdev= 0.00, samples=2 00:09:47.206 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:09:47.206 lat (msec) : 4=0.10%, 10=5.98%, 20=50.22%, 50=43.45%, 100=0.25% 00:09:47.206 cpu : usr=4.68%, sys=6.07%, ctx=375, majf=0, minf=1 00:09:47.206 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:09:47.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.206 issued rwts: total=2912,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.206 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.206 job1: (groupid=0, jobs=1): err= 0: pid=1178763: Tue Oct 8 18:17:05 2024 00:09:47.206 read: IOPS=5429, BW=21.2MiB/s (22.2MB/s)(21.4MiB/1007msec) 00:09:47.206 slat (usec): min=2, max=10100, avg=93.85, stdev=632.34 00:09:47.206 clat (usec): min=2574, max=21210, avg=11772.93, stdev=2803.14 00:09:47.206 lat (usec): min=4108, max=21227, avg=11866.79, stdev=2841.20 00:09:47.206 clat percentiles (usec): 00:09:47.206 | 1.00th=[ 5211], 5.00th=[ 8160], 10.00th=[ 9372], 20.00th=[10028], 00:09:47.206 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:09:47.206 | 70.00th=[12256], 80.00th=[13042], 90.00th=[15795], 95.00th=[18220], 00:09:47.206 | 99.00th=[20317], 99.50th=[20841], 99.90th=[21103], 99.95th=[21103], 00:09:47.206 | 99.99th=[21103] 00:09:47.206 write: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec); 0 zone resets 00:09:47.206 slat (usec): min=3, max=21372, avg=77.75, stdev=466.80 
00:09:47.206 clat (usec): min=2162, max=21139, avg=10654.10, stdev=2091.42 00:09:47.206 lat (usec): min=2186, max=29243, avg=10731.85, stdev=2137.23 00:09:47.206 clat percentiles (usec): 00:09:47.206 | 1.00th=[ 3687], 5.00th=[ 5669], 10.00th=[ 7635], 20.00th=[ 9765], 00:09:47.206 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:09:47.206 | 70.00th=[11600], 80.00th=[11863], 90.00th=[11994], 95.00th=[12256], 00:09:47.206 | 99.00th=[13566], 99.50th=[14091], 99.90th=[21103], 99.95th=[21103], 00:09:47.206 | 99.99th=[21103] 00:09:47.206 bw ( KiB/s): min=22480, max=22576, per=35.14%, avg=22528.00, stdev=67.88, samples=2 00:09:47.206 iops : min= 5620, max= 5644, avg=5632.00, stdev=16.97, samples=2 00:09:47.206 lat (msec) : 4=0.68%, 10=18.96%, 20=79.32%, 50=1.03% 00:09:47.206 cpu : usr=5.96%, sys=10.64%, ctx=614, majf=0, minf=1 00:09:47.206 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:47.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.206 issued rwts: total=5468,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.206 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.207 job2: (groupid=0, jobs=1): err= 0: pid=1178764: Tue Oct 8 18:17:05 2024 00:09:47.207 read: IOPS=4853, BW=19.0MiB/s (19.9MB/s)(19.1MiB/1007msec) 00:09:47.207 slat (usec): min=3, max=11846, avg=108.45, stdev=771.27 00:09:47.207 clat (usec): min=3028, max=24790, avg=13842.99, stdev=3430.32 00:09:47.207 lat (usec): min=4474, max=24806, avg=13951.44, stdev=3467.93 00:09:47.207 clat percentiles (usec): 00:09:47.207 | 1.00th=[ 6194], 5.00th=[ 9896], 10.00th=[10814], 20.00th=[11994], 00:09:47.207 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12649], 60.00th=[12911], 00:09:47.207 | 70.00th=[14222], 80.00th=[16450], 90.00th=[19530], 95.00th=[21627], 00:09:47.207 | 99.00th=[23462], 99.50th=[23987], 99.90th=[24773], 99.95th=[24773], 00:09:47.207 | 99.99th=[24773] 00:09:47.207 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:09:47.207 slat (usec): min=3, max=10460, avg=83.29, stdev=504.39 00:09:47.207 clat (usec): min=1377, max=24740, avg=11735.70, stdev=2589.86 00:09:47.207 lat (usec): min=1390, max=24747, avg=11818.99, stdev=2642.08 00:09:47.207 clat percentiles (usec): 00:09:47.207 | 1.00th=[ 4359], 5.00th=[ 7046], 10.00th=[ 7963], 20.00th=[ 9110], 00:09:47.207 | 30.00th=[11338], 40.00th=[12387], 50.00th=[12649], 60.00th=[13042], 00:09:47.207 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13566], 95.00th=[13829], 00:09:47.207 | 99.00th=[17695], 99.50th=[17695], 99.90th=[24249], 99.95th=[24249], 00:09:47.207 | 99.99th=[24773] 00:09:47.207 bw ( KiB/s): min=20480, max=20480, per=31.94%, avg=20480.00, stdev= 0.00, samples=2 00:09:47.207 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:09:47.207 lat (msec) : 2=0.04%, 4=0.35%, 10=14.29%, 20=80.99%, 50=4.33% 00:09:47.207 cpu : usr=5.67%, sys=9.54%, ctx=491, majf=0, minf=2 00:09:47.207 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:47.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.207 issued rwts: total=4887,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.207 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.207 job3: (groupid=0, jobs=1): err= 0: pid=1178765: Tue Oct 8 18:17:05 2024 
00:09:47.207 read: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec) 00:09:47.207 slat (usec): min=3, max=26231, avg=278.39, stdev=1594.30 00:09:47.207 clat (usec): min=15586, max=95546, avg=31994.58, stdev=14961.43 00:09:47.207 lat (usec): min=17386, max=95561, avg=32272.97, stdev=15041.82 00:09:47.207 clat percentiles (usec): 00:09:47.207 | 1.00th=[18220], 5.00th=[19792], 10.00th=[21365], 20.00th=[23462], 00:09:47.207 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24773], 60.00th=[25297], 00:09:47.207 | 70.00th=[25822], 80.00th=[42730], 90.00th=[62653], 95.00th=[64750], 00:09:47.207 | 99.00th=[79168], 99.50th=[79168], 99.90th=[95945], 99.95th=[95945], 00:09:47.207 | 99.99th=[95945] 00:09:47.207 write: IOPS=2303, BW=9213KiB/s (9434kB/s)(9268KiB/1006msec); 0 zone resets 00:09:47.207 slat (usec): min=4, max=33294, avg=177.48, stdev=1173.00 00:09:47.207 clat (usec): min=1906, max=95952, avg=24688.46, stdev=15144.95 00:09:47.207 lat (usec): min=10349, max=95973, avg=24865.93, stdev=15133.75 00:09:47.207 clat percentiles (usec): 00:09:47.207 | 1.00th=[10814], 5.00th=[12518], 10.00th=[14353], 20.00th=[16712], 00:09:47.207 | 30.00th=[17957], 40.00th=[18744], 50.00th=[19006], 60.00th=[19268], 00:09:47.207 | 70.00th=[22414], 80.00th=[29754], 90.00th=[44827], 95.00th=[55837], 00:09:47.207 | 99.00th=[95945], 99.50th=[95945], 99.90th=[95945], 99.95th=[95945], 00:09:47.207 | 99.99th=[95945] 00:09:47.207 bw ( KiB/s): min= 8192, max= 9320, per=13.66%, avg=8756.00, stdev=797.62, samples=2 00:09:47.207 iops : min= 2048, max= 2330, avg=2189.00, stdev=199.40, samples=2 00:09:47.207 lat (msec) : 2=0.02%, 20=38.21%, 50=50.19%, 100=11.57% 00:09:47.207 cpu : usr=3.08%, sys=4.08%, ctx=188, majf=0, minf=1 00:09:47.207 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:09:47.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.207 issued rwts: total=2048,2317,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.207 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.207 00:09:47.207 Run status group 0 (all jobs): 00:09:47.207 READ: bw=59.4MiB/s (62.3MB/s), 8143KiB/s-21.2MiB/s (8339kB/s-22.2MB/s), io=59.8MiB (62.7MB), run=1006-1007msec 00:09:47.207 WRITE: bw=62.6MiB/s (65.7MB/s), 9213KiB/s-21.8MiB/s (9434kB/s-22.9MB/s), io=63.1MiB (66.1MB), run=1006-1007msec 00:09:47.207 00:09:47.207 Disk stats (read/write): 00:09:47.207 nvme0n1: ios=2445/2560, merge=0/0, ticks=37373/57461, in_queue=94834, util=97.29% 00:09:47.207 nvme0n2: ios=4659/4662, merge=0/0, ticks=48595/41999, in_queue=90594, util=92.68% 00:09:47.207 nvme0n3: ios=4153/4386, merge=0/0, ticks=53644/49520, in_queue=103164, util=90.62% 00:09:47.207 nvme0n4: ios=1672/2048, merge=0/0, ticks=15099/10125, in_queue=25224, util=99.37% 00:09:47.207 18:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:47.207 [global] 00:09:47.207 thread=1 00:09:47.207 invalidate=1 00:09:47.207 rw=randwrite 00:09:47.207 time_based=1 00:09:47.207 runtime=1 00:09:47.207 ioengine=libaio 00:09:47.207 direct=1 00:09:47.207 bs=4096 00:09:47.207 iodepth=128 00:09:47.207 norandommap=0 00:09:47.207 numjobs=1 00:09:47.207 00:09:47.207 verify_dump=1 00:09:47.207 verify_backlog=512 00:09:47.207 verify_state_save=0 00:09:47.207 do_verify=1 00:09:47.207 verify=crc32c-intel 00:09:47.207 [job0] 00:09:47.207 
filename=/dev/nvme0n1 00:09:47.207 [job1] 00:09:47.207 filename=/dev/nvme0n2 00:09:47.207 [job2] 00:09:47.207 filename=/dev/nvme0n3 00:09:47.207 [job3] 00:09:47.207 filename=/dev/nvme0n4 00:09:47.207 Could not set queue depth (nvme0n1) 00:09:47.207 Could not set queue depth (nvme0n2) 00:09:47.207 Could not set queue depth (nvme0n3) 00:09:47.207 Could not set queue depth (nvme0n4) 00:09:47.465 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:47.465 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:47.465 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:47.465 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:47.465 fio-3.35 00:09:47.465 Starting 4 threads 00:09:48.838 00:09:48.838 job0: (groupid=0, jobs=1): err= 0: pid=1179000: Tue Oct 8 18:17:06 2024 00:09:48.838 read: IOPS=5294, BW=20.7MiB/s (21.7MB/s)(20.7MiB/1003msec) 00:09:48.838 slat (usec): min=2, max=6231, avg=91.22, stdev=529.41 00:09:48.838 clat (usec): min=876, max=19026, avg=11539.11, stdev=1739.49 00:09:48.838 lat (usec): min=5012, max=19186, avg=11630.33, stdev=1777.39 00:09:48.838 clat percentiles (usec): 00:09:48.838 | 1.00th=[ 6194], 5.00th=[ 8455], 10.00th=[ 9503], 20.00th=[10552], 00:09:48.838 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11600], 00:09:48.838 | 70.00th=[12518], 80.00th=[13042], 90.00th=[13698], 95.00th=[14222], 00:09:48.838 | 99.00th=[15533], 99.50th=[16188], 99.90th=[17695], 99.95th=[18220], 00:09:48.838 | 99.99th=[19006] 00:09:48.838 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:09:48.838 slat (usec): min=3, max=5597, avg=82.84, stdev=379.00 00:09:48.838 clat (usec): min=4034, max=17842, avg=11647.14, stdev=1336.08 00:09:48.838 lat (usec): min=4041, max=18265, avg=11729.98, stdev=1360.93 00:09:48.838 clat percentiles (usec): 00:09:48.838 | 1.00th=[ 7570], 5.00th=[ 9372], 10.00th=[10290], 20.00th=[10945], 00:09:48.838 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[11863], 00:09:48.838 | 70.00th=[12125], 80.00th=[12387], 90.00th=[13042], 95.00th=[13698], 00:09:48.838 | 99.00th=[15795], 99.50th=[16450], 99.90th=[17695], 99.95th=[17695], 00:09:48.838 | 99.99th=[17957] 00:09:48.838 bw ( KiB/s): min=21144, max=23912, per=32.31%, avg=22528.00, stdev=1957.27, samples=2 00:09:48.838 iops : min= 5286, max= 5978, avg=5632.00, stdev=489.32, samples=2 00:09:48.838 lat (usec) : 1000=0.01% 00:09:48.838 lat (msec) : 10=9.99%, 20=90.00% 00:09:48.838 cpu : usr=5.29%, sys=10.78%, ctx=625, majf=0, minf=1 00:09:48.838 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:48.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.839 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:48.839 issued rwts: total=5310,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.839 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:48.839 job1: (groupid=0, jobs=1): err= 0: pid=1179001: Tue Oct 8 18:17:06 2024 00:09:48.839 read: IOPS=4553, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1012msec) 00:09:48.839 slat (usec): min=2, max=11606, avg=111.13, stdev=751.28 00:09:48.839 clat (usec): min=4051, max=33992, avg=13670.18, stdev=4236.71 00:09:48.839 lat (usec): min=4063, max=34010, avg=13781.31, stdev=4280.32 00:09:48.839 clat percentiles 
(usec): 00:09:48.839 | 1.00th=[ 6259], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10683], 00:09:48.839 | 30.00th=[11207], 40.00th=[12125], 50.00th=[12518], 60.00th=[13698], 00:09:48.839 | 70.00th=[14746], 80.00th=[15926], 90.00th=[18744], 95.00th=[22152], 00:09:48.839 | 99.00th=[31065], 99.50th=[32375], 99.90th=[33817], 99.95th=[33817], 00:09:48.839 | 99.99th=[33817] 00:09:48.839 write: IOPS=4780, BW=18.7MiB/s (19.6MB/s)(18.9MiB/1012msec); 0 zone resets 00:09:48.839 slat (usec): min=4, max=11870, avg=92.03, stdev=492.17 00:09:48.839 clat (usec): min=1414, max=36496, avg=13494.69, stdev=5524.91 00:09:48.839 lat (usec): min=1427, max=36519, avg=13586.72, stdev=5569.41 00:09:48.839 clat percentiles (usec): 00:09:48.839 | 1.00th=[ 3064], 5.00th=[ 6325], 10.00th=[ 8848], 20.00th=[10683], 00:09:48.839 | 30.00th=[11338], 40.00th=[11600], 50.00th=[12256], 60.00th=[12911], 00:09:48.839 | 70.00th=[13698], 80.00th=[14746], 90.00th=[22938], 95.00th=[26084], 00:09:48.839 | 99.00th=[32375], 99.50th=[33162], 99.90th=[36439], 99.95th=[36439], 00:09:48.839 | 99.99th=[36439] 00:09:48.839 bw ( KiB/s): min=17208, max=20480, per=27.03%, avg=18844.00, stdev=2313.65, samples=2 00:09:48.839 iops : min= 4302, max= 5120, avg=4711.00, stdev=578.41, samples=2 00:09:48.839 lat (msec) : 2=0.20%, 4=0.61%, 10=11.32%, 20=78.18%, 50=9.69% 00:09:48.839 cpu : usr=5.24%, sys=8.61%, ctx=548, majf=0, minf=1 00:09:48.839 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:48.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.839 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:48.839 issued rwts: total=4608,4838,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.839 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:48.839 job2: (groupid=0, jobs=1): err= 0: pid=1179002: Tue Oct 8 18:17:06 2024 00:09:48.839 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1007msec) 00:09:48.839 slat (usec): min=2, max=35726, avg=180.32, stdev=1400.06 00:09:48.839 clat (usec): min=6156, max=93041, avg=21251.22, stdev=11477.97 00:09:48.839 lat (usec): min=6175, max=93057, avg=21431.54, stdev=11597.35 00:09:48.839 clat percentiles (usec): 00:09:48.839 | 1.00th=[ 7898], 5.00th=[11600], 10.00th=[12780], 20.00th=[14222], 00:09:48.839 | 30.00th=[14615], 40.00th=[16909], 50.00th=[19006], 60.00th=[20579], 00:09:48.839 | 70.00th=[20841], 80.00th=[22676], 90.00th=[36439], 95.00th=[45351], 00:09:48.839 | 99.00th=[76022], 99.50th=[76022], 99.90th=[76022], 99.95th=[76022], 00:09:48.839 | 99.99th=[92799] 00:09:48.839 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:09:48.839 slat (usec): min=3, max=15405, avg=165.79, stdev=903.74 00:09:48.839 clat (usec): min=1526, max=91157, avg=23659.68, stdev=16113.34 00:09:48.839 lat (usec): min=1536, max=91171, avg=23825.48, stdev=16204.06 00:09:48.839 clat percentiles (usec): 00:09:48.839 | 1.00th=[ 6194], 5.00th=[ 8029], 10.00th=[11600], 20.00th=[13173], 00:09:48.839 | 30.00th=[14222], 40.00th=[14746], 50.00th=[15270], 60.00th=[23200], 00:09:48.839 | 70.00th=[26870], 80.00th=[28967], 90.00th=[50070], 95.00th=[61080], 00:09:48.839 | 99.00th=[74974], 99.50th=[77071], 99.90th=[78119], 99.95th=[78119], 00:09:48.839 | 99.99th=[90702] 00:09:48.839 bw ( KiB/s): min= 8328, max=15312, per=16.95%, avg=11820.00, stdev=4938.43, samples=2 00:09:48.839 iops : min= 2082, max= 3828, avg=2955.00, stdev=1234.61, samples=2 00:09:48.839 lat (msec) : 2=0.11%, 10=4.63%, 20=51.95%, 50=36.35%, 100=6.97% 00:09:48.839 cpu 
: usr=3.68%, sys=3.88%, ctx=330, majf=0, minf=1 00:09:48.839 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:48.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.839 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:48.839 issued rwts: total=2570,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.839 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:48.839 job3: (groupid=0, jobs=1): err= 0: pid=1179003: Tue Oct 8 18:17:06 2024 00:09:48.839 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1006msec) 00:09:48.839 slat (usec): min=2, max=14710, avg=125.59, stdev=865.99 00:09:48.839 clat (usec): min=3168, max=40004, avg=15261.92, stdev=4498.22 00:09:48.839 lat (usec): min=4989, max=40009, avg=15387.51, stdev=4552.40 00:09:48.839 clat percentiles (usec): 00:09:48.839 | 1.00th=[ 6456], 5.00th=[ 9372], 10.00th=[11731], 20.00th=[12387], 00:09:48.839 | 30.00th=[12911], 40.00th=[13960], 50.00th=[14353], 60.00th=[14877], 00:09:48.839 | 70.00th=[15270], 80.00th=[17695], 90.00th=[21103], 95.00th=[24511], 00:09:48.839 | 99.00th=[31851], 99.50th=[36963], 99.90th=[40109], 99.95th=[40109], 00:09:48.839 | 99.99th=[40109] 00:09:48.839 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:09:48.839 slat (usec): min=3, max=11443, avg=109.02, stdev=486.05 00:09:48.839 clat (usec): min=1304, max=39989, avg=15888.89, stdev=7672.61 00:09:48.839 lat (usec): min=1318, max=40001, avg=15997.91, stdev=7729.31 00:09:48.839 clat percentiles (usec): 00:09:48.839 | 1.00th=[ 3589], 5.00th=[ 5866], 10.00th=[ 8094], 20.00th=[10683], 00:09:48.839 | 30.00th=[13173], 40.00th=[13698], 50.00th=[13829], 60.00th=[14615], 00:09:48.839 | 70.00th=[15270], 80.00th=[19530], 90.00th=[27919], 95.00th=[34866], 00:09:48.839 | 99.00th=[39060], 99.50th=[39584], 99.90th=[39584], 99.95th=[39584], 00:09:48.839 | 99.99th=[40109] 00:09:48.839 bw ( KiB/s): min=16384, max=16384, per=23.50%, avg=16384.00, stdev= 0.00, samples=2 00:09:48.839 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:09:48.839 lat (msec) : 2=0.12%, 4=0.78%, 10=11.44%, 20=72.12%, 50=15.54% 00:09:48.839 cpu : usr=4.78%, sys=8.06%, ctx=488, majf=0, minf=1 00:09:48.839 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:48.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.839 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:48.839 issued rwts: total=4088,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.839 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:48.839 00:09:48.839 Run status group 0 (all jobs): 00:09:48.839 READ: bw=64.0MiB/s (67.1MB/s), 9.97MiB/s-20.7MiB/s (10.5MB/s-21.7MB/s), io=64.8MiB (67.9MB), run=1003-1012msec 00:09:48.839 WRITE: bw=68.1MiB/s (71.4MB/s), 11.9MiB/s-21.9MiB/s (12.5MB/s-23.0MB/s), io=68.9MiB (72.2MB), run=1003-1012msec 00:09:48.839 00:09:48.839 Disk stats (read/write): 00:09:48.839 nvme0n1: ios=4658/4637, merge=0/0, ticks=26242/25645, in_queue=51887, util=87.17% 00:09:48.839 nvme0n2: ios=4143/4223, merge=0/0, ticks=53821/49829, in_queue=103650, util=99.49% 00:09:48.839 nvme0n3: ios=2477/2560, merge=0/0, ticks=31430/29848, in_queue=61278, util=99.06% 00:09:48.839 nvme0n4: ios=3094/3584, merge=0/0, ticks=48254/56696, in_queue=104950, util=98.53% 00:09:48.839 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:48.839 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@59 -- # fio_pid=1179139 00:09:48.839 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:48.839 18:17:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:48.839 [global] 00:09:48.839 thread=1 00:09:48.839 invalidate=1 00:09:48.839 rw=read 00:09:48.839 time_based=1 00:09:48.839 runtime=10 00:09:48.839 ioengine=libaio 00:09:48.839 direct=1 00:09:48.839 bs=4096 00:09:48.839 iodepth=1 00:09:48.839 norandommap=1 00:09:48.839 numjobs=1 00:09:48.839 00:09:48.839 [job0] 00:09:48.839 filename=/dev/nvme0n1 00:09:48.839 [job1] 00:09:48.839 filename=/dev/nvme0n2 00:09:48.839 [job2] 00:09:48.839 filename=/dev/nvme0n3 00:09:48.839 [job3] 00:09:48.839 filename=/dev/nvme0n4 00:09:48.839 Could not set queue depth (nvme0n1) 00:09:48.839 Could not set queue depth (nvme0n2) 00:09:48.839 Could not set queue depth (nvme0n3) 00:09:48.839 Could not set queue depth (nvme0n4) 00:09:48.839 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.839 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.839 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.839 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:48.839 fio-3.35 00:09:48.839 Starting 4 threads 00:09:52.118 18:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:52.118 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:52.118 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=299008, buflen=4096 00:09:52.118 fio: pid=1179230, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:52.118 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:52.119 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:52.119 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=37367808, buflen=4096 00:09:52.119 fio: pid=1179229, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:52.376 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:52.376 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:52.376 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=6852608, buflen=4096 00:09:52.377 fio: pid=1179227, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:52.942 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=58155008, buflen=4096 00:09:52.942 fio: pid=1179228, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:52.942 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for 
malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:52.942 18:17:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:52.942 00:09:52.942 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1179227: Tue Oct 8 18:17:11 2024 00:09:52.942 read: IOPS=479, BW=1916KiB/s (1962kB/s)(6692KiB/3492msec) 00:09:52.942 slat (usec): min=4, max=28918, avg=35.59, stdev=806.08 00:09:52.942 clat (usec): min=189, max=41242, avg=2035.17, stdev=8271.99 00:09:52.942 lat (usec): min=195, max=70055, avg=2070.76, stdev=8438.68 00:09:52.942 clat percentiles (usec): 00:09:52.942 | 1.00th=[ 198], 5.00th=[ 212], 10.00th=[ 221], 20.00th=[ 237], 00:09:52.942 | 30.00th=[ 245], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 269], 00:09:52.942 | 70.00th=[ 281], 80.00th=[ 306], 90.00th=[ 388], 95.00th=[ 490], 00:09:52.942 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:52.942 | 99.99th=[41157] 00:09:52.942 bw ( KiB/s): min= 96, max= 6816, per=8.41%, avg=2216.00, stdev=3292.17, samples=6 00:09:52.942 iops : min= 24, max= 1704, avg=554.00, stdev=823.04, samples=6 00:09:52.942 lat (usec) : 250=36.98%, 500=58.00%, 750=0.60% 00:09:52.942 lat (msec) : 20=0.06%, 50=4.30% 00:09:52.942 cpu : usr=0.37%, sys=0.40%, ctx=1677, majf=0, minf=1 00:09:52.942 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.942 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.942 issued rwts: total=1674,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.942 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.942 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1179228: Tue Oct 8 18:17:11 2024 00:09:52.942 read: IOPS=3730, BW=14.6MiB/s (15.3MB/s)(55.5MiB/3806msec) 00:09:52.942 slat (usec): min=4, max=22717, avg=15.24, stdev=288.52 00:09:52.942 clat (usec): min=159, max=41320, avg=248.72, stdev=979.89 00:09:52.942 lat (usec): min=165, max=41358, avg=263.96, stdev=1021.72 00:09:52.942 clat percentiles (usec): 00:09:52.942 | 1.00th=[ 180], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 202], 00:09:52.942 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 221], 00:09:52.942 | 70.00th=[ 227], 80.00th=[ 237], 90.00th=[ 260], 95.00th=[ 281], 00:09:52.942 | 99.00th=[ 355], 99.50th=[ 396], 99.90th=[ 1020], 99.95th=[41157], 00:09:52.942 | 99.99th=[41157] 00:09:52.942 bw ( KiB/s): min= 5072, max=17440, per=56.97%, avg=15009.71, stdev=4437.18, samples=7 00:09:52.942 iops : min= 1268, max= 4360, avg=3752.43, stdev=1109.29, samples=7 00:09:52.942 lat (usec) : 250=86.98%, 500=12.72%, 750=0.13%, 1000=0.04% 00:09:52.942 lat (msec) : 2=0.03%, 4=0.01%, 20=0.01%, 50=0.06% 00:09:52.942 cpu : usr=1.89%, sys=4.86%, ctx=14207, majf=0, minf=2 00:09:52.942 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.942 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.942 issued rwts: total=14199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.942 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.942 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): 
pid=1179229: Tue Oct 8 18:17:11 2024 00:09:52.942 read: IOPS=2842, BW=11.1MiB/s (11.6MB/s)(35.6MiB/3210msec) 00:09:52.942 slat (usec): min=5, max=13896, avg=15.35, stdev=145.47 00:09:52.942 clat (usec): min=183, max=41945, avg=330.50, stdev=1478.19 00:09:52.942 lat (usec): min=189, max=54948, avg=345.85, stdev=1526.58 00:09:52.942 clat percentiles (usec): 00:09:52.942 | 1.00th=[ 196], 5.00th=[ 208], 10.00th=[ 217], 20.00th=[ 233], 00:09:52.942 | 30.00th=[ 243], 40.00th=[ 253], 50.00th=[ 273], 60.00th=[ 297], 00:09:52.942 | 70.00th=[ 314], 80.00th=[ 322], 90.00th=[ 334], 95.00th=[ 347], 00:09:52.942 | 99.00th=[ 388], 99.50th=[ 449], 99.90th=[40633], 99.95th=[41157], 00:09:52.942 | 99.99th=[42206] 00:09:52.942 bw ( KiB/s): min= 8128, max=15048, per=46.13%, avg=12154.67, stdev=2315.36, samples=6 00:09:52.942 iops : min= 2032, max= 3762, avg=3038.67, stdev=578.84, samples=6 00:09:52.942 lat (usec) : 250=37.93%, 500=61.76%, 750=0.13%, 1000=0.02% 00:09:52.942 lat (msec) : 2=0.01%, 50=0.13% 00:09:52.942 cpu : usr=2.24%, sys=6.17%, ctx=9126, majf=0, minf=1 00:09:52.942 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.942 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.942 issued rwts: total=9124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.942 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.942 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1179230: Tue Oct 8 18:17:11 2024 00:09:52.942 read: IOPS=25, BW=99.5KiB/s (102kB/s)(292KiB/2935msec) 00:09:52.942 slat (nsec): min=13463, max=46522, avg=21867.82, stdev=8641.26 00:09:52.942 clat (usec): min=430, max=41308, avg=39861.43, stdev=6656.86 00:09:52.942 lat (usec): min=445, max=41324, avg=39883.40, stdev=6656.09 00:09:52.942 clat percentiles (usec): 00:09:52.942 | 1.00th=[ 433], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:52.942 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:52.942 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:52.942 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:52.942 | 99.99th=[41157] 00:09:52.942 bw ( KiB/s): min= 96, max= 104, per=0.38%, avg=100.80, stdev= 4.38, samples=5 00:09:52.942 iops : min= 24, max= 26, avg=25.20, stdev= 1.10, samples=5 00:09:52.942 lat (usec) : 500=1.35%, 750=1.35% 00:09:52.942 lat (msec) : 50=95.95% 00:09:52.942 cpu : usr=0.07%, sys=0.00%, ctx=74, majf=0, minf=2 00:09:52.942 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.942 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.942 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.942 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.942 00:09:52.942 Run status group 0 (all jobs): 00:09:52.942 READ: bw=25.7MiB/s (27.0MB/s), 99.5KiB/s-14.6MiB/s (102kB/s-15.3MB/s), io=97.9MiB (103MB), run=2935-3806msec 00:09:52.942 00:09:52.942 Disk stats (read/write): 00:09:52.942 nvme0n1: ios=1670/0, merge=0/0, ticks=3271/0, in_queue=3271, util=94.82% 00:09:52.942 nvme0n2: ios=13570/0, merge=0/0, ticks=3275/0, in_queue=3275, util=95.42% 00:09:52.942 nvme0n3: ios=9120/0, merge=0/0, ticks=2862/0, in_queue=2862, util=96.39% 00:09:52.942 nvme0n4: ios=71/0, merge=0/0, 
ticks=2830/0, in_queue=2830, util=96.75% 00:09:53.200 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:53.200 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:53.457 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:53.457 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:53.715 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:53.715 18:17:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:53.973 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:53.973 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:54.231 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:54.231 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1179139 00:09:54.231 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:54.231 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:54.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.231 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:54.231 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:09:54.231 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:54.231 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:54.231 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:54.231 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:54.231 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:09:54.488 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:54.488 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:54.488 nvmf hotplug test: fio failed as expected 00:09:54.488 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:54.746 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:54.746 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 
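The hotplug sequence traced above boils down to: start the fio read job in the background, delete the backing bdevs over RPC while it is running, then wait for fio and treat a nonzero exit as the expected outcome. A condensed sketch, paraphrased from the fio.sh trace in this run (paths shortened; the bdev names are the ones this run happened to create, the real script derives them from $malloc_bdevs, $raid_malloc_bdevs and $concat_malloc_bdevs):

scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &    # background read job against /dev/nvme0n1..n4
fio_pid=$!
sleep 3
scripts/rpc.py bdev_raid_delete concat0                     # pull the bdevs out from under the running job
scripts/rpc.py bdev_raid_delete raid0
for bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    scripts/rpc.py bdev_malloc_delete "$bdev"
done
fio_status=0
wait "$fio_pid" || fio_status=$?
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
[ "$fio_status" -ne 0 ] && echo 'nvmf hotplug test: fio failed as expected'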
00:09:54.746 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:54.746 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:54.746 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:54.746 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:54.746 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:54.746 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:54.746 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:54.746 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:54.746 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:54.746 rmmod nvme_tcp 00:09:54.746 rmmod nvme_fabrics 00:09:54.746 rmmod nvme_keyring 00:09:54.746 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:54.746 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:54.746 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:54.746 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 1177106 ']' 00:09:54.746 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 1177106 00:09:54.746 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1177106 ']' 00:09:54.746 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1177106 00:09:54.746 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:09:54.746 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:54.746 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1177106 00:09:54.746 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:54.746 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:54.746 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1177106' 00:09:54.746 killing process with pid 1177106 00:09:54.746 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1177106 00:09:54.746 18:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1177106 00:09:55.005 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:55.005 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:55.005 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:55.005 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:55.005 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:09:55.005 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:55.005 18:17:13 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:09:55.005 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:55.005 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:55.005 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.005 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.005 18:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:57.545 00:09:57.545 real 0m24.407s 00:09:57.545 user 1m25.253s 00:09:57.545 sys 0m7.268s 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.545 ************************************ 00:09:57.545 END TEST nvmf_fio_target 00:09:57.545 ************************************ 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:57.545 ************************************ 00:09:57.545 START TEST nvmf_bdevio 00:09:57.545 ************************************ 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:57.545 * Looking for test storage... 
00:09:57.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:57.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.545 --rc genhtml_branch_coverage=1 00:09:57.545 --rc genhtml_function_coverage=1 00:09:57.545 --rc genhtml_legend=1 00:09:57.545 --rc geninfo_all_blocks=1 00:09:57.545 --rc geninfo_unexecuted_blocks=1 00:09:57.545 00:09:57.545 ' 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:57.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.545 --rc genhtml_branch_coverage=1 00:09:57.545 --rc genhtml_function_coverage=1 00:09:57.545 --rc genhtml_legend=1 00:09:57.545 --rc geninfo_all_blocks=1 00:09:57.545 --rc geninfo_unexecuted_blocks=1 00:09:57.545 00:09:57.545 ' 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:57.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.545 --rc genhtml_branch_coverage=1 00:09:57.545 --rc genhtml_function_coverage=1 00:09:57.545 --rc genhtml_legend=1 00:09:57.545 --rc geninfo_all_blocks=1 00:09:57.545 --rc geninfo_unexecuted_blocks=1 00:09:57.545 00:09:57.545 ' 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:57.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.545 --rc genhtml_branch_coverage=1 00:09:57.545 --rc genhtml_function_coverage=1 00:09:57.545 --rc genhtml_legend=1 00:09:57.545 --rc geninfo_all_blocks=1 00:09:57.545 --rc geninfo_unexecuted_blocks=1 00:09:57.545 00:09:57.545 ' 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:57.545 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:57.546 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:57.546 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.546 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.546 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.546 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:57.546 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:57.546 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:57.546 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:57.546 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:57.546 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:57.546 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:57.546 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:09:57.546 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:57.546 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.546 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:57.546 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:57.546 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:57.546 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.546 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.546 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.546 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:57.546 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:57.546 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:57.546 18:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:59.450 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:59.450 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:59.450 18:17:17 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:59.450 Found net devices under 0000:09:00.0: cvl_0_0 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:59.450 Found net devices under 0000:09:00.1: cvl_0_1 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:59.450 
18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:59.450 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:59.451 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:59.451 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:59.451 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:59.745 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:59.745 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:59.745 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:59.745 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:59.745 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:59.745 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:09:59.745 00:09:59.745 --- 10.0.0.2 ping statistics --- 00:09:59.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.745 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:09:59.745 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:59.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:59.745 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:09:59.745 00:09:59.745 --- 10.0.0.1 ping statistics --- 00:09:59.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.745 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:09:59.745 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:59.745 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:09:59.745 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:59.745 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:59.745 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:59.745 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:59.745 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:59.745 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:59.745 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:59.745 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:59.745 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:59.746 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:59.746 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.746 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=1181995 00:09:59.746 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:59.746 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 1181995 00:09:59.746 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1181995 ']' 00:09:59.746 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.746 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:59.746 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.746 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:59.746 18:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.746 [2024-10-08 18:17:17.847138] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:09:59.746 [2024-10-08 18:17:17.847217] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.746 [2024-10-08 18:17:17.912731] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:00.027 [2024-10-08 18:17:18.023828] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:00.027 [2024-10-08 18:17:18.023877] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:00.027 [2024-10-08 18:17:18.023906] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.027 [2024-10-08 18:17:18.023917] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.027 [2024-10-08 18:17:18.023926] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:00.027 [2024-10-08 18:17:18.025624] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:10:00.027 [2024-10-08 18:17:18.025738] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:10:00.027 [2024-10-08 18:17:18.025806] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:10:00.027 [2024-10-08 18:17:18.025809] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:10:00.027 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:00.027 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:00.027 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:00.027 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:00.027 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.027 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:00.027 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:00.027 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.027 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.027 [2024-10-08 18:17:18.202117] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:00.027 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.027 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:00.027 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.027 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.027 Malloc0 00:10:00.027 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.027 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:00.027 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.027 18:17:18 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.027 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.027 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:00.027 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.027 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.027 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.027 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:00.027 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.027 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:00.027 [2024-10-08 18:17:18.256385] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:00.027 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.027 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:00.027 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:00.027 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:10:00.027 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:10:00.027 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:00.028 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:00.028 { 00:10:00.028 "params": { 00:10:00.028 "name": "Nvme$subsystem", 00:10:00.028 "trtype": "$TEST_TRANSPORT", 00:10:00.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:00.028 "adrfam": "ipv4", 00:10:00.028 "trsvcid": "$NVMF_PORT", 00:10:00.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:00.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:00.028 "hdgst": ${hdgst:-false}, 00:10:00.028 "ddgst": ${ddgst:-false} 00:10:00.028 }, 00:10:00.028 "method": "bdev_nvme_attach_controller" 00:10:00.028 } 00:10:00.028 EOF 00:10:00.028 )") 00:10:00.028 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:10:00.028 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:10:00.028 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:10:00.028 18:17:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:00.028 "params": { 00:10:00.028 "name": "Nvme1", 00:10:00.028 "trtype": "tcp", 00:10:00.028 "traddr": "10.0.0.2", 00:10:00.028 "adrfam": "ipv4", 00:10:00.028 "trsvcid": "4420", 00:10:00.028 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:00.028 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:00.028 "hdgst": false, 00:10:00.028 "ddgst": false 00:10:00.028 }, 00:10:00.028 "method": "bdev_nvme_attach_controller" 00:10:00.028 }' 00:10:00.028 [2024-10-08 18:17:18.307934] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:10:00.028 [2024-10-08 18:17:18.308002] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1182028 ] 00:10:00.284 [2024-10-08 18:17:18.367835] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:00.284 [2024-10-08 18:17:18.486125] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.284 [2024-10-08 18:17:18.486175] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.284 [2024-10-08 18:17:18.486178] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.850 I/O targets: 00:10:00.850 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:00.850 00:10:00.850 00:10:00.850 CUnit - A unit testing framework for C - Version 2.1-3 00:10:00.850 http://cunit.sourceforge.net/ 00:10:00.850 00:10:00.850 00:10:00.850 Suite: bdevio tests on: Nvme1n1 00:10:00.850 Test: blockdev write read block ...passed 00:10:00.850 Test: blockdev write zeroes read block ...passed 00:10:00.850 Test: blockdev write zeroes read no split ...passed 00:10:00.850 Test: blockdev write zeroes read split ...passed 00:10:00.850 Test: blockdev write zeroes read split partial ...passed 00:10:00.850 Test: blockdev reset ...[2024-10-08 18:17:18.990545] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:00.850 [2024-10-08 18:17:18.990658] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd2700 (9): Bad file descriptor 00:10:00.850 [2024-10-08 18:17:19.008655] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:00.850 passed 00:10:00.850 Test: blockdev write read 8 blocks ...passed 00:10:00.850 Test: blockdev write read size > 128k ...passed 00:10:00.850 Test: blockdev write read invalid size ...passed 00:10:00.850 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:00.850 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:00.850 Test: blockdev write read max offset ...passed 00:10:00.850 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:00.850 Test: blockdev writev readv 8 blocks ...passed 00:10:00.850 Test: blockdev writev readv 30 x 1block ...passed 00:10:01.108 Test: blockdev writev readv block ...passed 00:10:01.108 Test: blockdev writev readv size > 128k ...passed 00:10:01.108 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:01.108 Test: blockdev comparev and writev ...[2024-10-08 18:17:19.180335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.108 [2024-10-08 18:17:19.180370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:01.108 [2024-10-08 18:17:19.180395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.108 [2024-10-08 18:17:19.180413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:01.108 [2024-10-08 18:17:19.180721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.108 [2024-10-08 18:17:19.180747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:01.108 [2024-10-08 18:17:19.180769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.108 [2024-10-08 18:17:19.180795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:01.108 [2024-10-08 18:17:19.181109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.108 [2024-10-08 18:17:19.181134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:01.108 [2024-10-08 18:17:19.181155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.108 [2024-10-08 18:17:19.181172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:01.108 [2024-10-08 18:17:19.181487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.108 [2024-10-08 18:17:19.181511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:01.108 [2024-10-08 18:17:19.181533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:01.108 [2024-10-08 18:17:19.181549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:01.108 passed 00:10:01.108 Test: blockdev nvme passthru rw ...passed 00:10:01.108 Test: blockdev nvme passthru vendor specific ...[2024-10-08 18:17:19.263535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:01.108 [2024-10-08 18:17:19.263563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:01.108 [2024-10-08 18:17:19.263701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:01.108 [2024-10-08 18:17:19.263724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:01.108 [2024-10-08 18:17:19.263860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:01.108 [2024-10-08 18:17:19.263884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:01.108 [2024-10-08 18:17:19.264025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:01.108 [2024-10-08 18:17:19.264048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:01.108 passed 00:10:01.108 Test: blockdev nvme admin passthru ...passed 00:10:01.108 Test: blockdev copy ...passed 00:10:01.108 00:10:01.108 Run Summary: Type Total Ran Passed Failed Inactive 00:10:01.108 suites 1 1 n/a 0 0 00:10:01.108 tests 23 23 23 0 0 00:10:01.108 asserts 152 152 152 0 n/a 00:10:01.108 00:10:01.108 Elapsed time = 0.963 seconds 00:10:01.366 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:01.366 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.366 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:01.366 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.366 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:01.366 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:01.366 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:01.366 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:01.366 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:01.366 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:01.366 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:01.366 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:01.366 rmmod nvme_tcp 00:10:01.366 rmmod nvme_fabrics 00:10:01.366 rmmod nvme_keyring 00:10:01.366 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:01.366 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:01.366 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:10:01.366 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 1181995 ']' 00:10:01.366 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 1181995 00:10:01.366 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1181995 ']' 00:10:01.366 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1181995 00:10:01.366 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:01.366 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:01.366 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1181995 00:10:01.624 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:01.624 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:01.624 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1181995' 00:10:01.624 killing process with pid 1181995 00:10:01.624 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1181995 00:10:01.624 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1181995 00:10:01.882 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:01.882 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:01.882 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:01.882 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:01.882 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:10:01.882 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:01.882 18:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:10:01.882 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:01.882 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:01.882 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.882 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.882 18:17:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.787 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:03.787 00:10:03.787 real 0m6.778s 00:10:03.787 user 0m11.093s 00:10:03.787 sys 0m2.247s 00:10:03.787 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:03.787 18:17:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.787 ************************************ 00:10:03.787 END TEST nvmf_bdevio 00:10:03.787 ************************************ 00:10:03.787 18:17:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:03.787 00:10:03.787 real 3m59.363s 00:10:03.787 user 10m23.565s 00:10:03.787 sys 1m8.880s 
00:10:03.787 18:17:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:03.787 18:17:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:03.787 ************************************ 00:10:03.787 END TEST nvmf_target_core 00:10:03.787 ************************************ 00:10:04.046 18:17:22 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:04.046 18:17:22 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:04.046 18:17:22 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:04.046 18:17:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:04.046 ************************************ 00:10:04.046 START TEST nvmf_target_extra 00:10:04.046 ************************************ 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:04.047 * Looking for test storage... 00:10:04.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:04.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.047 --rc genhtml_branch_coverage=1 00:10:04.047 --rc genhtml_function_coverage=1 00:10:04.047 --rc genhtml_legend=1 00:10:04.047 --rc geninfo_all_blocks=1 00:10:04.047 --rc geninfo_unexecuted_blocks=1 00:10:04.047 00:10:04.047 ' 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:04.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.047 --rc genhtml_branch_coverage=1 00:10:04.047 --rc genhtml_function_coverage=1 00:10:04.047 --rc genhtml_legend=1 00:10:04.047 --rc geninfo_all_blocks=1 00:10:04.047 --rc geninfo_unexecuted_blocks=1 00:10:04.047 00:10:04.047 ' 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:04.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.047 --rc genhtml_branch_coverage=1 00:10:04.047 --rc genhtml_function_coverage=1 00:10:04.047 --rc genhtml_legend=1 00:10:04.047 --rc geninfo_all_blocks=1 00:10:04.047 --rc geninfo_unexecuted_blocks=1 00:10:04.047 00:10:04.047 ' 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:04.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.047 --rc genhtml_branch_coverage=1 00:10:04.047 --rc genhtml_function_coverage=1 00:10:04.047 --rc genhtml_legend=1 00:10:04.047 --rc geninfo_all_blocks=1 00:10:04.047 --rc geninfo_unexecuted_blocks=1 00:10:04.047 00:10:04.047 ' 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:04.047 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:04.047 ************************************ 00:10:04.047 START TEST nvmf_example 00:10:04.047 ************************************ 00:10:04.047 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:04.307 * Looking for test storage... 
00:10:04.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:04.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.307 --rc genhtml_branch_coverage=1 00:10:04.307 --rc genhtml_function_coverage=1 00:10:04.307 --rc genhtml_legend=1 00:10:04.307 --rc geninfo_all_blocks=1 00:10:04.307 --rc geninfo_unexecuted_blocks=1 00:10:04.307 00:10:04.307 ' 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:04.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.307 --rc genhtml_branch_coverage=1 00:10:04.307 --rc genhtml_function_coverage=1 00:10:04.307 --rc genhtml_legend=1 00:10:04.307 --rc geninfo_all_blocks=1 00:10:04.307 --rc geninfo_unexecuted_blocks=1 00:10:04.307 00:10:04.307 ' 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:04.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.307 --rc genhtml_branch_coverage=1 00:10:04.307 --rc genhtml_function_coverage=1 00:10:04.307 --rc genhtml_legend=1 00:10:04.307 --rc geninfo_all_blocks=1 00:10:04.307 --rc geninfo_unexecuted_blocks=1 00:10:04.307 00:10:04.307 ' 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:04.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.307 --rc genhtml_branch_coverage=1 00:10:04.307 --rc genhtml_function_coverage=1 00:10:04.307 --rc genhtml_legend=1 00:10:04.307 --rc geninfo_all_blocks=1 00:10:04.307 --rc geninfo_unexecuted_blocks=1 00:10:04.307 00:10:04.307 ' 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:04.307 18:17:22 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.307 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:04.308 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:04.308 18:17:22 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:04.308 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:06.841 18:17:24 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:06.841 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:06.841 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:06.842 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:06.842 Found net devices under 0000:09:00.0: cvl_0_0 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:06.842 Found net devices under 0000:09:00.1: cvl_0_1 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.842 18:17:24 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:06.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:06.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:10:06.842 00:10:06.842 --- 10.0.0.2 ping statistics --- 00:10:06.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.842 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:06.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:06.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:10:06.842 00:10:06.842 --- 10.0.0.1 ping statistics --- 00:10:06.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.842 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1184290 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1184290 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 1184290 ']' 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:06.842 18:17:24 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:06.842 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.775 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:07.775 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:10:07.775 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:07.775 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:07.775 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.775 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:07.775 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.775 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.775 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.775 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:07.775 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.775 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.775 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.775 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:07.775 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:07.775 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.775 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.775 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.775 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:07.775 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:07.775 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.775 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.775 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.775 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:07.775 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:07.775 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:07.775 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.775 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:07.775 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:17.739 Initializing NVMe Controllers 00:10:17.739 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:17.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:17.739 Initialization complete. Launching workers. 00:10:17.739 ======================================================== 00:10:17.739 Latency(us) 00:10:17.739 Device Information : IOPS MiB/s Average min max 00:10:17.739 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14862.50 58.06 4306.71 871.42 16633.56 00:10:17.739 ======================================================== 00:10:17.739 Total : 14862.50 58.06 4306.71 871.42 16633.56 00:10:17.739 00:10:17.997 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:17.997 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:17.997 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:17.997 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:17.997 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:17.997 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:17.997 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:17.997 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:17.997 rmmod nvme_tcp 00:10:17.997 rmmod nvme_fabrics 00:10:17.997 rmmod nvme_keyring 00:10:17.997 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:17.997 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:17.997 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:17.997 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 1184290 ']' 00:10:17.997 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 1184290 00:10:17.997 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 1184290 ']' 00:10:17.997 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 1184290 00:10:17.997 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:10:17.997 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:17.997 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1184290 00:10:17.997 18:17:36 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:10:17.997 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:10:17.997 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1184290' 00:10:17.997 killing process with pid 1184290 00:10:17.997 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 1184290 00:10:17.997 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 1184290 00:10:18.255 nvmf threads initialize successfully 00:10:18.255 bdev subsystem init successfully 00:10:18.255 created a nvmf target service 00:10:18.255 create targets's poll groups done 00:10:18.255 all subsystems of target started 00:10:18.255 nvmf target is running 00:10:18.255 all subsystems of target stopped 00:10:18.255 destroy targets's poll groups done 00:10:18.255 destroyed the nvmf target service 00:10:18.255 bdev subsystem finish successfully 00:10:18.255 nvmf threads destroy successfully 00:10:18.255 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:18.255 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:18.255 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:18.255 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:18.255 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save 00:10:18.255 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:18.255 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore 00:10:18.255 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:18.255 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:18.255 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.255 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.255 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:20.800 00:10:20.800 real 0m16.177s 00:10:20.800 user 0m45.576s 00:10:20.800 sys 0m3.325s 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:20.800 ************************************ 00:10:20.800 END TEST nvmf_example 00:10:20.800 ************************************ 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
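nvmftestfini in the trace above unwinds what the example test set up. Reduced to the essential host-side commands (the pid variable name is illustrative; the harness tracks the pid of the nvmf target app it launched, 1184290 in this run), the teardown is roughly:

    sync                                                    # settle outstanding I/O before touching kernel modules
    modprobe -r nvme-tcp nvme-fabrics                       # unload the initiator modules; the harness retries this up to 20 times
    kill "$nvmfpid" && wait "$nvmfpid"                      # stop the nvmf target application
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop the SPDK_NVMF-tagged firewall rules added for the run
    ip -4 addr flush cvl_0_1                                # clear the test addresses from the cvl_0_1 interface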
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:20.800 ************************************ 00:10:20.800 START TEST nvmf_filesystem 00:10:20.800 ************************************ 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:20.800 * Looking for test storage... 00:10:20.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:20.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.800 --rc genhtml_branch_coverage=1 00:10:20.800 --rc genhtml_function_coverage=1 00:10:20.800 --rc genhtml_legend=1 00:10:20.800 --rc geninfo_all_blocks=1 00:10:20.800 --rc geninfo_unexecuted_blocks=1 00:10:20.800 00:10:20.800 ' 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:20.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.800 --rc genhtml_branch_coverage=1 00:10:20.800 --rc genhtml_function_coverage=1 00:10:20.800 --rc genhtml_legend=1 00:10:20.800 --rc geninfo_all_blocks=1 00:10:20.800 --rc geninfo_unexecuted_blocks=1 00:10:20.800 00:10:20.800 ' 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:20.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.800 --rc genhtml_branch_coverage=1 00:10:20.800 --rc genhtml_function_coverage=1 00:10:20.800 --rc genhtml_legend=1 00:10:20.800 --rc geninfo_all_blocks=1 00:10:20.800 --rc geninfo_unexecuted_blocks=1 00:10:20.800 00:10:20.800 ' 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:20.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.800 --rc genhtml_branch_coverage=1 00:10:20.800 --rc genhtml_function_coverage=1 00:10:20.800 --rc genhtml_legend=1 00:10:20.800 --rc geninfo_all_blocks=1 00:10:20.800 --rc geninfo_unexecuted_blocks=1 00:10:20.800 00:10:20.800 ' 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:20.800 18:17:38 
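The lt/cmp_versions trace above is scripts/common.sh checking that the installed lcov (1.15) is older than 2, so the 1.x-style --rc lcov_branch_coverage/lcov_function_coverage options it exports right after are the correct ones. A compressed re-derivation of that comparison, independent of the real helper and only meant to show the idea (split both versions on '.', compare field by field, missing fields count as 0):

    ver_lt() {                                    # returns success when $1 < $2
            local -a a b
            IFS=. read -ra a <<< "$1"
            IFS=. read -ra b <<< "$2"
            local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
            for ((i = 0; i < n; i++)); do
                    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
                    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            done
            return 1                              # equal versions are not "less than"
    }
    ver_lt 1.15 2 && echo 'lcov < 2: keep the lcov 1.x option names'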
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:20.800 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:20.801 18:17:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # 
CONFIG_RDMA=y 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_TESTS=y 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:20.801 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:20.802 #define SPDK_CONFIG_H 00:10:20.802 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:20.802 #define SPDK_CONFIG_APPS 1 00:10:20.802 #define SPDK_CONFIG_ARCH native 00:10:20.802 #undef SPDK_CONFIG_ASAN 00:10:20.802 #undef SPDK_CONFIG_AVAHI 00:10:20.802 #undef SPDK_CONFIG_CET 00:10:20.802 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:20.802 #define SPDK_CONFIG_COVERAGE 1 00:10:20.802 #define SPDK_CONFIG_CROSS_PREFIX 00:10:20.802 #undef SPDK_CONFIG_CRYPTO 00:10:20.802 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:20.802 #undef SPDK_CONFIG_CUSTOMOCF 00:10:20.802 #undef SPDK_CONFIG_DAOS 00:10:20.802 #define SPDK_CONFIG_DAOS_DIR 00:10:20.802 #define SPDK_CONFIG_DEBUG 1 00:10:20.802 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:20.802 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:20.802 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:20.802 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:20.802 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:20.802 #undef SPDK_CONFIG_DPDK_UADK 00:10:20.802 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:20.802 #define SPDK_CONFIG_EXAMPLES 1 00:10:20.802 #undef SPDK_CONFIG_FC 00:10:20.802 #define SPDK_CONFIG_FC_PATH 00:10:20.802 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:20.802 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:20.802 #define SPDK_CONFIG_FSDEV 1 00:10:20.802 #undef SPDK_CONFIG_FUSE 00:10:20.802 #undef SPDK_CONFIG_FUZZER 00:10:20.802 #define SPDK_CONFIG_FUZZER_LIB 00:10:20.802 #undef SPDK_CONFIG_GOLANG 00:10:20.802 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:20.802 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:20.802 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:20.802 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:20.802 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:20.802 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:20.802 #undef SPDK_CONFIG_HAVE_LZ4 00:10:20.802 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:20.802 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:20.802 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:20.802 #define SPDK_CONFIG_IDXD 1 00:10:20.802 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:20.802 #undef SPDK_CONFIG_IPSEC_MB 00:10:20.802 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:20.802 #define SPDK_CONFIG_ISAL 1 00:10:20.802 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:20.802 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:20.802 #define SPDK_CONFIG_LIBDIR 00:10:20.802 #undef SPDK_CONFIG_LTO 00:10:20.802 #define SPDK_CONFIG_MAX_LCORES 128 00:10:20.802 #define SPDK_CONFIG_NVME_CUSE 1 00:10:20.802 #undef SPDK_CONFIG_OCF 00:10:20.802 #define SPDK_CONFIG_OCF_PATH 00:10:20.802 #define SPDK_CONFIG_OPENSSL_PATH 00:10:20.802 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:20.802 #define SPDK_CONFIG_PGO_DIR 00:10:20.802 #undef SPDK_CONFIG_PGO_USE 00:10:20.802 #define SPDK_CONFIG_PREFIX /usr/local 00:10:20.802 #undef SPDK_CONFIG_RAID5F 00:10:20.802 #undef SPDK_CONFIG_RBD 00:10:20.802 #define SPDK_CONFIG_RDMA 1 00:10:20.802 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:20.802 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:20.802 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:20.802 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:20.802 #define SPDK_CONFIG_SHARED 1 00:10:20.802 #undef SPDK_CONFIG_SMA 00:10:20.802 #define SPDK_CONFIG_TESTS 1 00:10:20.802 #undef SPDK_CONFIG_TSAN 00:10:20.802 #define SPDK_CONFIG_UBLK 1 00:10:20.802 #define SPDK_CONFIG_UBSAN 1 00:10:20.802 #undef SPDK_CONFIG_UNIT_TESTS 00:10:20.802 #undef SPDK_CONFIG_URING 00:10:20.802 #define 
SPDK_CONFIG_URING_PATH 00:10:20.802 #undef SPDK_CONFIG_URING_ZNS 00:10:20.802 #undef SPDK_CONFIG_USDT 00:10:20.802 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:20.802 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:20.802 #define SPDK_CONFIG_VFIO_USER 1 00:10:20.802 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:20.802 #define SPDK_CONFIG_VHOST 1 00:10:20.802 #define SPDK_CONFIG_VIRTIO 1 00:10:20.802 #undef SPDK_CONFIG_VTUNE 00:10:20.802 #define SPDK_CONFIG_VTUNE_DIR 00:10:20.802 #define SPDK_CONFIG_WERROR 1 00:10:20.802 #define SPDK_CONFIG_WPDK_DIR 00:10:20.802 #undef SPDK_CONFIG_XNVME 00:10:20.802 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.802 18:17:38 
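The long block just above is the generated include/spdk/config.h being pattern-matched by applications.sh: the build is only treated as a debug build when SPDK_CONFIG_DEBUG is defined (it is here, matching CONFIG_DEBUG=y from build_config.sh), and even then the extra debug-app handling is gated behind SPDK_AUTOTEST_DEBUG_APPS. A minimal sketch of that gate, with the body elided:

    config_h=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h
    if [[ -e $config_h && $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]] \
            && (( SPDK_AUTOTEST_DEBUG_APPS )); then
            :    # debug-only launch extras for the SPDK apps would be applied here (illustrative)
    fi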
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:20.802 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- 
# export SPDK_TEST_IOAT 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:20.803 
18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:20.803 18:17:38 
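The run of ': 0' / ': 1' lines in this stretch is autotest_common.sh stamping a default onto every SPDK_TEST_*/SPDK_RUN_* knob and exporting it; values the job already injected (the tcp transport, the e810 NICs, UBSAN on) survive, and everything else falls back to the default visible in the trace. The idiom, shown for one representative flag with an illustrative default:

    : "${SPDK_TEST_NVMF:=0}"    # keep the value supplied by the job configuration if set, else default to 0
    export SPDK_TEST_NVMF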
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:20.803 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j48 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 1185993 ]] 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 1185993 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 
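Editor's note: the set_test_storage call entered here (and traced over the next records) picks a scratch directory with at least the requested space free, by reading df output into per-mount arrays and accepting the first candidate that fits. A minimal sketch of that selection, with a simplified helper name (pick_test_storage is not the real function) and without the tmpfs/ramfs special-casing the real code performs:

    # Hedged sketch of the space check done by set_test_storage; names simplified.
    pick_test_storage() {
        local requested_size=$1; shift            # bytes, e.g. 2147483648
        local dir avail_kb
        for dir in "$@"; do                       # candidate directories, most specific first
            # df -P line 2: Filesystem 1024-blocks Used Available Capacity Mounted-on
            avail_kb=$(df -P "$dir" 2>/dev/null | awk 'NR==2 {print $4}') || continue
            if (( avail_kb * 1024 >= requested_size )); then
                echo "$dir"
                return 0
            fi
        done
        return 1
    }

    # Usage: pick_test_storage 2147483648 "$testdir" /tmp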
00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:20.804 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.YmYRPy 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.YmYRPy/tests/target /tmp/spdk.YmYRPy 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=661032960 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:20.805 18:17:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=4623396864 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=51307364352 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=61988536320 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10681171968 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30982901760 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994268160 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12375265280 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12397707264 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=22441984 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=29920051200 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994268160 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=1074216960 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:20.805 18:17:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6198841344 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6198853632 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:20.805 * Looking for test storage... 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=51307364352 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=12895764480 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:10:20.805 18:17:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:20.805 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:20.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.806 --rc genhtml_branch_coverage=1 00:10:20.806 --rc genhtml_function_coverage=1 00:10:20.806 --rc genhtml_legend=1 00:10:20.806 --rc geninfo_all_blocks=1 00:10:20.806 --rc geninfo_unexecuted_blocks=1 00:10:20.806 00:10:20.806 ' 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:20.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.806 --rc genhtml_branch_coverage=1 00:10:20.806 --rc genhtml_function_coverage=1 00:10:20.806 --rc genhtml_legend=1 00:10:20.806 --rc geninfo_all_blocks=1 00:10:20.806 --rc geninfo_unexecuted_blocks=1 00:10:20.806 00:10:20.806 ' 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:20.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.806 --rc genhtml_branch_coverage=1 00:10:20.806 --rc genhtml_function_coverage=1 00:10:20.806 --rc genhtml_legend=1 00:10:20.806 --rc geninfo_all_blocks=1 00:10:20.806 --rc geninfo_unexecuted_blocks=1 00:10:20.806 00:10:20.806 ' 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:20.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.806 --rc genhtml_branch_coverage=1 00:10:20.806 --rc genhtml_function_coverage=1 00:10:20.806 --rc genhtml_legend=1 00:10:20.806 --rc geninfo_all_blocks=1 00:10:20.806 --rc geninfo_unexecuted_blocks=1 00:10:20.806 00:10:20.806 ' 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
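Editor's note: the lt 1.15 2 check above decides whether the installed lcov is older than 2.x, which in turn selects the legacy --rc lcov_branch_coverage/--rc lcov_function_coverage flags seen in LCOV_OPTS. The comparison is a field-by-field numeric walk over the version strings; a compact, simplified sketch (the real cmp_versions in scripts/common.sh also handles '>', '=', and padding of missing fields):

    # Returns success (0) when $1 is strictly less than $2, comparing dot-separated fields.
    version_lt() {
        local IFS='.-:'
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1    # equal
    }

    version_lt 1.15 2 && echo "lcov < 2: use legacy --rc lcov_* options"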
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:20.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.806 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:20.807 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:20.807 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:20.807 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:23.337 
18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:23.337 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:23.337 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:23.337 Found net devices under 0000:09:00.0: cvl_0_0 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:23.337 Found net devices under 
0000:09:00.1: cvl_0_1 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:23.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:23.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:10:23.337 00:10:23.337 --- 10.0.0.2 ping statistics --- 00:10:23.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.337 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:23.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:23.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:10:23.337 00:10:23.337 --- 10.0.0.1 ping statistics --- 00:10:23.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.337 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:23.337 ************************************ 00:10:23.337 START TEST nvmf_filesystem_no_in_capsule 00:10:23.337 ************************************ 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
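Editor's note: the network bring-up traced above moves one port of the discovered E810 pair (cvl_0_0) into a private namespace to play the target side, keeps cvl_0_1 in the root namespace as the initiator side, and verifies reachability in both directions before the NVMe/TCP target is started. The commands below are collected from this trace; nvmf/common.sh runs them through its own helpers.

    ip netns add cvl_0_0_ns_spdk                                         # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # move target NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                   # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> initiator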
00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=1187643 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 1187643 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1187643 ']' 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:23.337 [2024-10-08 18:17:41.276297] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:10:23.337 [2024-10-08 18:17:41.276401] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.337 [2024-10-08 18:17:41.343741] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:23.337 [2024-10-08 18:17:41.455146] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:23.337 [2024-10-08 18:17:41.455207] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:23.337 [2024-10-08 18:17:41.455220] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:23.337 [2024-10-08 18:17:41.455231] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:23.337 [2024-10-08 18:17:41.455240] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
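Editor's note: nvmf_tgt is launched inside the namespace with the arguments shown in the trace, and the harness then blocks in waitforlisten until the application answers on its RPC socket. The loop below is a simplified stand-in for that wait (the real helper in autotest_common.sh retries an actual RPC call and has richer timeout handling; here we only check that the process is alive and the UNIX socket exists):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!    # sketch only: the real harness resolves the nvmf_tgt pid itself

    for _ in $(seq 1 100); do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        [ -S /var/tmp/spdk.sock ] && break
        sleep 0.1
    done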
00:10:23.337 [2024-10-08 18:17:41.456973] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.337 [2024-10-08 18:17:41.457041] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:23.337 [2024-10-08 18:17:41.457109] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:10:23.337 [2024-10-08 18:17:41.457112] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:23.337 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.338 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:23.338 [2024-10-08 18:17:41.618994] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:23.595 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.595 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:23.595 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.595 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:23.595 Malloc1 00:10:23.595 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.595 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:23.595 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.595 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:23.595 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.595 18:17:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:23.595 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.595 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:23.595 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.595 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:23.595 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.595 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:23.595 [2024-10-08 18:17:41.814884] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:23.595 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.595 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:23.595 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:23.595 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:23.595 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:23.595 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:23.595 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:23.595 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.595 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:23.595 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.595 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:23.595 { 00:10:23.595 "name": "Malloc1", 00:10:23.595 "aliases": [ 00:10:23.595 "11748abf-8ee3-4e9c-8d57-1debf1c4c9c2" 00:10:23.595 ], 00:10:23.595 "product_name": "Malloc disk", 00:10:23.595 "block_size": 512, 00:10:23.595 "num_blocks": 1048576, 00:10:23.595 "uuid": "11748abf-8ee3-4e9c-8d57-1debf1c4c9c2", 00:10:23.595 "assigned_rate_limits": { 00:10:23.595 "rw_ios_per_sec": 0, 00:10:23.595 "rw_mbytes_per_sec": 0, 00:10:23.595 "r_mbytes_per_sec": 0, 00:10:23.596 "w_mbytes_per_sec": 0 00:10:23.596 }, 00:10:23.596 "claimed": true, 00:10:23.596 "claim_type": "exclusive_write", 00:10:23.596 "zoned": false, 00:10:23.596 "supported_io_types": { 00:10:23.596 "read": 
true, 00:10:23.596 "write": true, 00:10:23.596 "unmap": true, 00:10:23.596 "flush": true, 00:10:23.596 "reset": true, 00:10:23.596 "nvme_admin": false, 00:10:23.596 "nvme_io": false, 00:10:23.596 "nvme_io_md": false, 00:10:23.596 "write_zeroes": true, 00:10:23.596 "zcopy": true, 00:10:23.596 "get_zone_info": false, 00:10:23.596 "zone_management": false, 00:10:23.596 "zone_append": false, 00:10:23.596 "compare": false, 00:10:23.596 "compare_and_write": false, 00:10:23.596 "abort": true, 00:10:23.596 "seek_hole": false, 00:10:23.596 "seek_data": false, 00:10:23.596 "copy": true, 00:10:23.596 "nvme_iov_md": false 00:10:23.596 }, 00:10:23.596 "memory_domains": [ 00:10:23.596 { 00:10:23.596 "dma_device_id": "system", 00:10:23.596 "dma_device_type": 1 00:10:23.596 }, 00:10:23.596 { 00:10:23.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.596 "dma_device_type": 2 00:10:23.596 } 00:10:23.596 ], 00:10:23.596 "driver_specific": {} 00:10:23.596 } 00:10:23.596 ]' 00:10:23.596 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:23.596 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:23.596 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:23.854 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:23.854 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:23.854 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:23.854 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:23.854 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:24.419 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:24.419 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:24.419 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:24.419 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:24.419 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:26.317 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:26.317 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:26.318 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:26.318 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:26.318 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:26.318 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:26.318 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:26.318 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:26.318 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:26.318 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:26.318 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:26.318 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:26.318 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:26.318 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:26.318 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:26.318 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:26.318 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:26.576 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:26.834 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:28.207 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:28.207 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:28.207 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:28.207 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:28.207 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.207 ************************************ 00:10:28.207 START TEST filesystem_ext4 00:10:28.207 ************************************ 00:10:28.207 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
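A minimal standalone sketch of the setup steps logged above — not the literal test script. It derives the Malloc1 bdev size from bdev_get_bdevs the way get_bdev_size does, attaches the namespace over NVMe/TCP, locates the block device by its serial, and lays one GPT partition across it. scripts/rpc.py is assumed to stand in for the harness's rpc_cmd wrapper; the address, port, NQN, host NQN and serial are the ones from this run.

# Bdev size in bytes, as get_bdev_size computes it.
bs=$(scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')    # 512 in this run
nb=$(scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')    # 1048576 in this run
malloc_size=$(( bs * nb ))                                               # 536870912

# Attach the namespace and find the resulting block device by serial.
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')

# One GPT partition spanning the namespace, as target/filesystem.sh does.
mkdir -p /mnt/device
parted -s "/dev/${nvme_name}" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe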
00:10:28.207 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:28.207 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:28.207 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:28.207 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:28.207 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:28.207 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:28.207 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:28.207 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:28.207 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:28.207 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:28.207 mke2fs 1.47.0 (5-Feb-2023) 00:10:28.207 Discarding device blocks: 0/522240 done 00:10:28.207 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:28.207 Filesystem UUID: 157abaae-d5da-40e1-b1ae-d162b963b295 00:10:28.207 Superblock backups stored on blocks: 00:10:28.207 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:28.207 00:10:28.207 Allocating group tables: 0/64 done 00:10:28.207 Writing inode tables: 0/64 done 00:10:29.579 Creating journal (8192 blocks): done 00:10:30.402 Writing superblocks and filesystem accounting information: 0/64 done 00:10:30.402 00:10:30.402 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:30.403 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:35.662 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:35.920 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:35.920 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:35.920 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:35.920 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:35.920 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:35.920 
18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1187643 00:10:35.920 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:35.920 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:35.920 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:35.920 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:35.920 00:10:35.920 real 0m7.888s 00:10:35.920 user 0m0.016s 00:10:35.920 sys 0m0.063s 00:10:35.920 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:35.920 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:35.920 ************************************ 00:10:35.920 END TEST filesystem_ext4 00:10:35.920 ************************************ 00:10:35.920 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:35.920 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:35.920 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.920 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.920 ************************************ 00:10:35.920 START TEST filesystem_btrfs 00:10:35.920 ************************************ 00:10:35.920 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:35.920 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:35.920 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:35.920 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:35.920 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:10:35.920 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:35.920 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:10:35.920 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:10:35.920 18:17:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:10:35.920 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:10:35.920 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:36.178 btrfs-progs v6.8.1 00:10:36.178 See https://btrfs.readthedocs.io for more information. 00:10:36.178 00:10:36.178 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:36.178 NOTE: several default settings have changed in version 5.15, please make sure 00:10:36.178 this does not affect your deployments: 00:10:36.178 - DUP for metadata (-m dup) 00:10:36.178 - enabled no-holes (-O no-holes) 00:10:36.178 - enabled free-space-tree (-R free-space-tree) 00:10:36.178 00:10:36.178 Label: (null) 00:10:36.178 UUID: c38a1984-8157-40a9-a025-412cc9a58872 00:10:36.178 Node size: 16384 00:10:36.178 Sector size: 4096 (CPU page size: 4096) 00:10:36.178 Filesystem size: 510.00MiB 00:10:36.178 Block group profiles: 00:10:36.178 Data: single 8.00MiB 00:10:36.178 Metadata: DUP 32.00MiB 00:10:36.178 System: DUP 8.00MiB 00:10:36.178 SSD detected: yes 00:10:36.178 Zoned device: no 00:10:36.178 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:36.178 Checksum: crc32c 00:10:36.178 Number of devices: 1 00:10:36.178 Devices: 00:10:36.178 ID SIZE PATH 00:10:36.178 1 510.00MiB /dev/nvme0n1p1 00:10:36.178 00:10:36.178 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:10:36.178 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:36.436 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:36.436 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:36.436 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:36.436 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:36.436 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:36.436 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:36.436 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1187643 00:10:36.436 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:36.436 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:36.436 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:36.436 
18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:36.436 00:10:36.436 real 0m0.500s 00:10:36.436 user 0m0.012s 00:10:36.436 sys 0m0.105s 00:10:36.436 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:36.436 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:36.436 ************************************ 00:10:36.436 END TEST filesystem_btrfs 00:10:36.436 ************************************ 00:10:36.436 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:36.436 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:36.436 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:36.436 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.436 ************************************ 00:10:36.436 START TEST filesystem_xfs 00:10:36.436 ************************************ 00:10:36.436 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:10:36.436 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:36.436 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:36.436 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:36.436 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:10:36.436 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:36.437 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:10:36.437 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:10:36.437 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:10:36.437 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:10:36.437 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:36.437 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:36.437 = sectsz=512 attr=2, projid32bit=1 00:10:36.437 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:36.437 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:36.437 data 
= bsize=4096 blocks=130560, imaxpct=25 00:10:36.437 = sunit=0 swidth=0 blks 00:10:36.437 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:36.437 log =internal log bsize=4096 blocks=16384, version=2 00:10:36.437 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:36.437 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:37.881 Discarding blocks...Done. 00:10:37.881 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:10:37.881 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:39.253 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:39.253 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1187643 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:39.511 00:10:39.511 real 0m2.994s 00:10:39.511 user 0m0.013s 00:10:39.511 sys 0m0.070s 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:39.511 ************************************ 00:10:39.511 END TEST filesystem_xfs 00:10:39.511 ************************************ 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:39.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.511 18:17:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1187643 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1187643 ']' 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1187643 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1187643 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1187643' 00:10:39.511 killing process with pid 1187643 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 1187643 00:10:39.511 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 1187643 00:10:40.076 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:40.076 00:10:40.076 real 0m16.997s 00:10:40.076 user 1m5.480s 00:10:40.076 sys 0m2.163s 00:10:40.076 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:40.076 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.076 ************************************ 00:10:40.076 END TEST nvmf_filesystem_no_in_capsule 00:10:40.076 ************************************ 00:10:40.076 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:40.076 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:40.076 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:40.076 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:40.076 ************************************ 00:10:40.076 START TEST nvmf_filesystem_in_capsule 00:10:40.076 ************************************ 00:10:40.076 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:10:40.076 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:40.076 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:40.076 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:40.076 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:40.076 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.076 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=1189875 00:10:40.076 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:40.076 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 1189875 00:10:40.076 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1189875 ']' 00:10:40.076 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.076 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:40.076 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
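Teardown, as logged above, is the mirror of setup: unmount, disconnect the initiator, delete the subsystem over RPC, then kill the nvmf_tgt process and wait for it. A condensed sketch under the same assumptions (scripts/rpc.py in place of rpc_cmd, $nvmfpid captured when nvmf_tgt was launched — 1187643 in this pass):

umount /mnt/device 2>/dev/null || true
nvme disconnect -n nqn.2016-06.io.spdk:cnode1          # prints "disconnected 1 controller(s)"
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid" && wait "$nvmfpid"                     # wait works only because nvmf_tgt is a child of this shell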
00:10:40.077 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:40.077 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.077 [2024-10-08 18:17:58.329191] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:10:40.077 [2024-10-08 18:17:58.329276] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.334 [2024-10-08 18:17:58.397454] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:40.334 [2024-10-08 18:17:58.508356] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:40.334 [2024-10-08 18:17:58.508419] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:40.334 [2024-10-08 18:17:58.508433] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:40.334 [2024-10-08 18:17:58.508445] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:40.334 [2024-10-08 18:17:58.508454] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:40.334 [2024-10-08 18:17:58.510131] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.334 [2024-10-08 18:17:58.510194] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:40.334 [2024-10-08 18:17:58.510260] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:10:40.334 [2024-10-08 18:17:58.510263] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.593 [2024-10-08 18:17:58.673066] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:40.593 18:17:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.593 Malloc1 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.593 [2024-10-08 18:17:58.852118] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:40.593 18:17:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:40.593 { 00:10:40.593 "name": "Malloc1", 00:10:40.593 "aliases": [ 00:10:40.593 "e1c4b931-4d82-40a2-ab7c-43e1cbf6e339" 00:10:40.593 ], 00:10:40.593 "product_name": "Malloc disk", 00:10:40.593 "block_size": 512, 00:10:40.593 "num_blocks": 1048576, 00:10:40.593 "uuid": "e1c4b931-4d82-40a2-ab7c-43e1cbf6e339", 00:10:40.593 "assigned_rate_limits": { 00:10:40.593 "rw_ios_per_sec": 0, 00:10:40.593 "rw_mbytes_per_sec": 0, 00:10:40.593 "r_mbytes_per_sec": 0, 00:10:40.593 "w_mbytes_per_sec": 0 00:10:40.593 }, 00:10:40.593 "claimed": true, 00:10:40.593 "claim_type": "exclusive_write", 00:10:40.593 "zoned": false, 00:10:40.593 "supported_io_types": { 00:10:40.593 "read": true, 00:10:40.593 "write": true, 00:10:40.593 "unmap": true, 00:10:40.593 "flush": true, 00:10:40.593 "reset": true, 00:10:40.593 "nvme_admin": false, 00:10:40.593 "nvme_io": false, 00:10:40.593 "nvme_io_md": false, 00:10:40.593 "write_zeroes": true, 00:10:40.593 "zcopy": true, 00:10:40.593 "get_zone_info": false, 00:10:40.593 "zone_management": false, 00:10:40.593 "zone_append": false, 00:10:40.593 "compare": false, 00:10:40.593 "compare_and_write": false, 00:10:40.593 "abort": true, 00:10:40.593 "seek_hole": false, 00:10:40.593 "seek_data": false, 00:10:40.593 "copy": true, 00:10:40.593 "nvme_iov_md": false 00:10:40.593 }, 00:10:40.593 "memory_domains": [ 00:10:40.593 { 00:10:40.593 "dma_device_id": "system", 00:10:40.593 "dma_device_type": 1 00:10:40.593 }, 00:10:40.593 { 00:10:40.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.593 "dma_device_type": 2 00:10:40.593 } 00:10:40.593 ], 00:10:40.593 "driver_specific": {} 00:10:40.593 } 00:10:40.593 ]' 00:10:40.593 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:40.851 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:40.851 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:40.851 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:40.851 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:40.851 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:40.851 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:40.851 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:41.417 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:41.417 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:41.417 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:41.417 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:41.417 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:43.969 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:43.969 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:43.969 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:43.969 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:43.969 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:43.969 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:43.969 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:43.969 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:43.969 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:43.969 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:43.969 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:43.969 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:43.969 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:43.969 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:43.969 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:43.969 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:43.969 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:43.969 18:18:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:44.534 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:45.466 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:45.466 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:45.466 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:45.466 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:45.466 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:45.725 ************************************ 00:10:45.725 START TEST filesystem_in_capsule_ext4 00:10:45.725 ************************************ 00:10:45.725 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:45.725 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:45.725 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:45.725 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:45.725 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:45.725 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:45.725 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:45.725 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:45.725 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:45.725 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:45.725 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:45.725 mke2fs 1.47.0 (5-Feb-2023) 00:10:45.725 Discarding device blocks: 0/522240 done 00:10:45.725 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:45.725 Filesystem UUID: 3c4c848d-4a9d-4943-a0a3-544c63b0236d 00:10:45.725 Superblock backups stored on blocks: 00:10:45.725 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:45.725 00:10:45.725 Allocating group tables: 0/64 done 00:10:45.725 Writing inode tables: 
0/64 done 00:10:45.725 Creating journal (8192 blocks): done 00:10:45.725 Writing superblocks and filesystem accounting information: 0/64 done 00:10:45.725 00:10:45.725 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:45.725 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:52.279 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:52.279 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:52.279 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:52.279 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:52.279 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:52.279 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:52.279 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1189875 00:10:52.279 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:52.279 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:52.279 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:52.279 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:52.279 00:10:52.279 real 0m6.200s 00:10:52.279 user 0m0.030s 00:10:52.279 sys 0m0.051s 00:10:52.279 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:52.279 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:52.279 ************************************ 00:10:52.279 END TEST filesystem_in_capsule_ext4 00:10:52.279 ************************************ 00:10:52.279 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:52.279 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:52.279 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:52.279 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.279 
************************************ 00:10:52.279 START TEST filesystem_in_capsule_btrfs 00:10:52.279 ************************************ 00:10:52.279 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:52.279 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:52.279 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:52.279 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:52.279 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:10:52.279 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:52.279 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:10:52.279 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:10:52.279 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:10:52.279 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:10:52.279 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:52.279 btrfs-progs v6.8.1 00:10:52.279 See https://btrfs.readthedocs.io for more information. 00:10:52.279 00:10:52.279 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
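Stepping back: the only setup difference between this in_capsule pass and the earlier no_in_capsule pass is the transport, created here with a 4096-byte in-capsule data size (-c 4096) instead of 0. The target-side bring-up reduces to the RPC sequence below — a sketch with scripts/rpc.py standing in for rpc_cmd and all values taken from the log:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096     # -c 4096: allow 4 KiB of in-capsule data
scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1               # 512 MiB malloc bdev, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420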
00:10:52.279 NOTE: several default settings have changed in version 5.15, please make sure 00:10:52.279 this does not affect your deployments: 00:10:52.279 - DUP for metadata (-m dup) 00:10:52.279 - enabled no-holes (-O no-holes) 00:10:52.279 - enabled free-space-tree (-R free-space-tree) 00:10:52.279 00:10:52.279 Label: (null) 00:10:52.279 UUID: b5829b98-eabd-422d-9a5e-03dbae73fdf3 00:10:52.280 Node size: 16384 00:10:52.280 Sector size: 4096 (CPU page size: 4096) 00:10:52.280 Filesystem size: 510.00MiB 00:10:52.280 Block group profiles: 00:10:52.280 Data: single 8.00MiB 00:10:52.280 Metadata: DUP 32.00MiB 00:10:52.280 System: DUP 8.00MiB 00:10:52.280 SSD detected: yes 00:10:52.280 Zoned device: no 00:10:52.280 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:52.280 Checksum: crc32c 00:10:52.280 Number of devices: 1 00:10:52.280 Devices: 00:10:52.280 ID SIZE PATH 00:10:52.280 1 510.00MiB /dev/nvme0n1p1 00:10:52.280 00:10:52.280 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:10:52.280 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:52.845 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:52.845 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:52.845 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:52.845 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:52.845 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:52.846 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:52.846 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1189875 00:10:52.846 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:52.846 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:52.846 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:52.846 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:52.846 00:10:52.846 real 0m0.908s 00:10:52.846 user 0m0.020s 00:10:52.846 sys 0m0.096s 00:10:52.846 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:52.846 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:10:52.846 ************************************ 00:10:52.846 END TEST filesystem_in_capsule_btrfs 00:10:52.846 ************************************ 00:10:52.846 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:52.846 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:52.846 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:52.846 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:52.846 ************************************ 00:10:52.846 START TEST filesystem_in_capsule_xfs 00:10:52.846 ************************************ 00:10:52.846 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:10:52.846 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:52.846 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:52.846 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:52.846 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:10:52.846 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:52.846 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:10:52.846 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:10:52.846 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:10:52.846 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:10:52.846 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:52.846 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:52.846 = sectsz=512 attr=2, projid32bit=1 00:10:52.846 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:52.846 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:52.846 data = bsize=4096 blocks=130560, imaxpct=25 00:10:52.846 = sunit=0 swidth=0 blks 00:10:52.846 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:52.846 log =internal log bsize=4096 blocks=16384, version=2 00:10:52.846 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:52.846 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:53.778 Discarding blocks...Done. 
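The per-filesystem check that follows each mkfs is the same short loop the log has already shown for ext4 and btrfs: mount the partition, create and remove a file with a sync on either side, unmount, then confirm the target is still alive and the namespace and partition are still visible. Spelled out once in plain shell, with the paths and PID from this run:

mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"                          # nvmf_tgt (1189875 here) must still be running
lsblk -l -o NAME | grep -q -w nvme0n1       # namespace still present
lsblk -l -o NAME | grep -q -w nvme0n1p1     # partition still present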
00:10:53.778 18:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:10:53.778 18:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:56.303 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:56.303 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:56.303 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:56.303 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:56.303 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:56.303 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:56.303 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1189875 00:10:56.303 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:56.303 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:56.303 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:56.303 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:56.303 00:10:56.303 real 0m3.415s 00:10:56.303 user 0m0.019s 00:10:56.303 sys 0m0.061s 00:10:56.303 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:56.303 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:56.303 ************************************ 00:10:56.303 END TEST filesystem_in_capsule_xfs 00:10:56.303 ************************************ 00:10:56.303 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:56.561 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:56.561 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:56.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.561 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:56.561 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:10:56.561 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:56.561 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:56.561 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:56.561 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:56.561 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:56.561 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:56.561 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.561 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:56.561 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.561 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:56.561 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1189875 00:10:56.561 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1189875 ']' 00:10:56.561 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1189875 00:10:56.561 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:10:56.561 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:56.561 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1189875 00:10:56.561 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:56.561 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:56.561 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1189875' 00:10:56.561 killing process with pid 1189875 00:10:56.561 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 1189875 00:10:56.561 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 1189875 00:10:57.127 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:57.127 00:10:57.127 real 0m17.022s 00:10:57.127 user 1m5.597s 00:10:57.127 sys 0m2.148s 00:10:57.127 18:18:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:57.127 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:57.127 ************************************ 00:10:57.127 END TEST nvmf_filesystem_in_capsule 00:10:57.127 ************************************ 00:10:57.127 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:57.127 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:57.127 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:57.127 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:57.127 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:57.127 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:57.127 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:57.127 rmmod nvme_tcp 00:10:57.127 rmmod nvme_fabrics 00:10:57.128 rmmod nvme_keyring 00:10:57.128 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:57.128 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:57.128 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:57.128 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:10:57.128 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:57.128 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:57.128 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:57.128 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:57.128 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:10:57.128 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:57.128 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:10:57.128 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:57.128 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:57.128 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.128 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.128 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:59.664 00:10:59.664 real 0m38.891s 00:10:59.664 user 2m12.177s 00:10:59.664 sys 0m6.111s 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:59.664 
************************************ 00:10:59.664 END TEST nvmf_filesystem 00:10:59.664 ************************************ 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:59.664 ************************************ 00:10:59.664 START TEST nvmf_target_discovery 00:10:59.664 ************************************ 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:59.664 * Looking for test storage... 00:10:59.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:59.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.664 --rc genhtml_branch_coverage=1 00:10:59.664 --rc genhtml_function_coverage=1 00:10:59.664 --rc genhtml_legend=1 00:10:59.664 --rc geninfo_all_blocks=1 00:10:59.664 --rc geninfo_unexecuted_blocks=1 00:10:59.664 00:10:59.664 ' 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:59.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.664 --rc genhtml_branch_coverage=1 00:10:59.664 --rc genhtml_function_coverage=1 00:10:59.664 --rc genhtml_legend=1 00:10:59.664 --rc geninfo_all_blocks=1 00:10:59.664 --rc geninfo_unexecuted_blocks=1 00:10:59.664 00:10:59.664 ' 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:59.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.664 --rc genhtml_branch_coverage=1 00:10:59.664 --rc genhtml_function_coverage=1 00:10:59.664 --rc genhtml_legend=1 00:10:59.664 --rc geninfo_all_blocks=1 00:10:59.664 --rc geninfo_unexecuted_blocks=1 00:10:59.664 00:10:59.664 ' 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:59.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.664 --rc genhtml_branch_coverage=1 00:10:59.664 --rc genhtml_function_coverage=1 00:10:59.664 --rc genhtml_legend=1 00:10:59.664 --rc geninfo_all_blocks=1 00:10:59.664 --rc geninfo_unexecuted_blocks=1 00:10:59.664 00:10:59.664 ' 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.664 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:59.665 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:59.665 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.564 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:01.564 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:01.823 18:18:19 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:01.823 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:01.823 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:01.823 Found net devices under 0000:09:00.0: cvl_0_0 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:01.823 Found net devices under 0000:09:00.1: cvl_0_1 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:01.823 18:18:19 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:01.823 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:01.824 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:01.824 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:01.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:01.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:11:01.824 00:11:01.824 --- 10.0.0.2 ping statistics --- 00:11:01.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.824 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:11:01.824 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:01.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:01.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:11:01.824 00:11:01.824 --- 10.0.0.1 ping statistics --- 00:11:01.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.824 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:11:01.824 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:01.824 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:11:01.824 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:01.824 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:01.824 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:01.824 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:01.824 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:01.824 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:01.824 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:01.824 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:01.824 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:01.824 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:01.824 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.824 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=1194642 00:11:01.824 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:01.824 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 1194642 00:11:01.824 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 1194642 ']' 00:11:01.824 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.824 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:01.824 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.824 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:01.824 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:01.824 [2024-10-08 18:18:20.075233] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:11:01.824 [2024-10-08 18:18:20.075377] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:02.082 [2024-10-08 18:18:20.142444] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:02.082 [2024-10-08 18:18:20.255896] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:02.082 [2024-10-08 18:18:20.255958] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:02.082 [2024-10-08 18:18:20.255980] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:02.082 [2024-10-08 18:18:20.255991] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:02.082 [2024-10-08 18:18:20.256015] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
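Condensed from the nvmftestinit/nvmfappstart trace above: one e810 port (cvl_0_0) is moved into a network namespace to act as the target side, its peer port (cvl_0_1) stays in the root namespace as the initiator side, connectivity is verified both ways, and nvmf_tgt is launched inside the namespace. The following is a sketch of the commands the log shows, not the common.sh functions themselves; the binary path is abbreviated from the workspace path in the log:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                   # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# the harness then waits for the RPC socket /var/tmp/spdk.sock before issuing rpc_cmd calls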
00:11:02.082 [2024-10-08 18:18:20.257821] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.082 [2024-10-08 18:18:20.257928] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:02.082 [2024-10-08 18:18:20.258001] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:02.082 [2024-10-08 18:18:20.258004] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.340 [2024-10-08 18:18:20.419310] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.340 Null1 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.340 18:18:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.340 [2024-10-08 18:18:20.459675] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.340 Null2 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.340 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:02.341 Null3 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.341 Null4 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.341 18:18:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.341 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:11:02.599 00:11:02.599 Discovery Log Number of Records 6, Generation counter 6 00:11:02.599 =====Discovery Log Entry 0====== 00:11:02.599 trtype: tcp 00:11:02.599 adrfam: ipv4 00:11:02.599 subtype: current discovery subsystem 00:11:02.599 treq: not required 00:11:02.599 portid: 0 00:11:02.599 trsvcid: 4420 00:11:02.599 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:02.599 traddr: 10.0.0.2 00:11:02.599 eflags: explicit discovery connections, duplicate discovery information 00:11:02.599 sectype: none 00:11:02.599 =====Discovery Log Entry 1====== 00:11:02.599 trtype: tcp 00:11:02.599 adrfam: ipv4 00:11:02.599 subtype: nvme subsystem 00:11:02.599 treq: not required 00:11:02.599 portid: 0 00:11:02.599 trsvcid: 4420 00:11:02.599 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:02.599 traddr: 10.0.0.2 00:11:02.599 eflags: none 00:11:02.599 sectype: none 00:11:02.599 =====Discovery Log Entry 2====== 00:11:02.599 trtype: tcp 00:11:02.599 adrfam: ipv4 00:11:02.599 subtype: nvme subsystem 00:11:02.599 treq: not required 00:11:02.599 portid: 0 00:11:02.599 trsvcid: 4420 00:11:02.599 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:02.599 traddr: 10.0.0.2 00:11:02.599 eflags: none 00:11:02.599 sectype: none 00:11:02.599 =====Discovery Log Entry 3====== 00:11:02.599 trtype: tcp 00:11:02.599 adrfam: ipv4 00:11:02.599 subtype: nvme subsystem 00:11:02.599 treq: not required 00:11:02.599 portid: 0 00:11:02.599 trsvcid: 4420 00:11:02.599 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:02.599 traddr: 10.0.0.2 00:11:02.599 eflags: none 00:11:02.599 sectype: none 00:11:02.599 =====Discovery Log Entry 4====== 00:11:02.599 trtype: tcp 00:11:02.599 adrfam: ipv4 00:11:02.599 subtype: nvme subsystem 
00:11:02.599 treq: not required 00:11:02.599 portid: 0 00:11:02.599 trsvcid: 4420 00:11:02.599 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:02.599 traddr: 10.0.0.2 00:11:02.599 eflags: none 00:11:02.599 sectype: none 00:11:02.599 =====Discovery Log Entry 5====== 00:11:02.599 trtype: tcp 00:11:02.599 adrfam: ipv4 00:11:02.599 subtype: discovery subsystem referral 00:11:02.599 treq: not required 00:11:02.599 portid: 0 00:11:02.599 trsvcid: 4430 00:11:02.599 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:02.599 traddr: 10.0.0.2 00:11:02.599 eflags: none 00:11:02.599 sectype: none 00:11:02.599 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:02.599 Perform nvmf subsystem discovery via RPC 00:11:02.599 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:02.599 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.599 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.599 [ 00:11:02.599 { 00:11:02.599 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:02.599 "subtype": "Discovery", 00:11:02.599 "listen_addresses": [ 00:11:02.599 { 00:11:02.599 "trtype": "TCP", 00:11:02.599 "adrfam": "IPv4", 00:11:02.599 "traddr": "10.0.0.2", 00:11:02.599 "trsvcid": "4420" 00:11:02.599 } 00:11:02.599 ], 00:11:02.599 "allow_any_host": true, 00:11:02.599 "hosts": [] 00:11:02.599 }, 00:11:02.599 { 00:11:02.599 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:02.599 "subtype": "NVMe", 00:11:02.599 "listen_addresses": [ 00:11:02.599 { 00:11:02.599 "trtype": "TCP", 00:11:02.599 "adrfam": "IPv4", 00:11:02.599 "traddr": "10.0.0.2", 00:11:02.599 "trsvcid": "4420" 00:11:02.599 } 00:11:02.599 ], 00:11:02.599 "allow_any_host": true, 00:11:02.599 "hosts": [], 00:11:02.599 "serial_number": "SPDK00000000000001", 00:11:02.599 "model_number": "SPDK bdev Controller", 00:11:02.599 "max_namespaces": 32, 00:11:02.599 "min_cntlid": 1, 00:11:02.599 "max_cntlid": 65519, 00:11:02.599 "namespaces": [ 00:11:02.599 { 00:11:02.599 "nsid": 1, 00:11:02.599 "bdev_name": "Null1", 00:11:02.599 "name": "Null1", 00:11:02.599 "nguid": "57D6B1E799784B6CB96D1CB892235772", 00:11:02.599 "uuid": "57d6b1e7-9978-4b6c-b96d-1cb892235772" 00:11:02.599 } 00:11:02.599 ] 00:11:02.599 }, 00:11:02.599 { 00:11:02.599 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:02.599 "subtype": "NVMe", 00:11:02.599 "listen_addresses": [ 00:11:02.599 { 00:11:02.599 "trtype": "TCP", 00:11:02.599 "adrfam": "IPv4", 00:11:02.599 "traddr": "10.0.0.2", 00:11:02.599 "trsvcid": "4420" 00:11:02.599 } 00:11:02.599 ], 00:11:02.599 "allow_any_host": true, 00:11:02.599 "hosts": [], 00:11:02.599 "serial_number": "SPDK00000000000002", 00:11:02.599 "model_number": "SPDK bdev Controller", 00:11:02.599 "max_namespaces": 32, 00:11:02.599 "min_cntlid": 1, 00:11:02.599 "max_cntlid": 65519, 00:11:02.599 "namespaces": [ 00:11:02.599 { 00:11:02.599 "nsid": 1, 00:11:02.599 "bdev_name": "Null2", 00:11:02.599 "name": "Null2", 00:11:02.599 "nguid": "7578B82404E3473E9A9DF710D96AEC03", 00:11:02.599 "uuid": "7578b824-04e3-473e-9a9d-f710d96aec03" 00:11:02.599 } 00:11:02.599 ] 00:11:02.599 }, 00:11:02.599 { 00:11:02.599 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:02.599 "subtype": "NVMe", 00:11:02.599 "listen_addresses": [ 00:11:02.599 { 00:11:02.599 "trtype": "TCP", 00:11:02.599 "adrfam": "IPv4", 00:11:02.599 "traddr": "10.0.0.2", 
00:11:02.599 "trsvcid": "4420" 00:11:02.599 } 00:11:02.599 ], 00:11:02.599 "allow_any_host": true, 00:11:02.599 "hosts": [], 00:11:02.599 "serial_number": "SPDK00000000000003", 00:11:02.599 "model_number": "SPDK bdev Controller", 00:11:02.600 "max_namespaces": 32, 00:11:02.600 "min_cntlid": 1, 00:11:02.600 "max_cntlid": 65519, 00:11:02.600 "namespaces": [ 00:11:02.600 { 00:11:02.600 "nsid": 1, 00:11:02.600 "bdev_name": "Null3", 00:11:02.600 "name": "Null3", 00:11:02.600 "nguid": "170766178512403FA99B6ADEBC241CA2", 00:11:02.600 "uuid": "17076617-8512-403f-a99b-6adebc241ca2" 00:11:02.600 } 00:11:02.600 ] 00:11:02.600 }, 00:11:02.600 { 00:11:02.600 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:02.600 "subtype": "NVMe", 00:11:02.600 "listen_addresses": [ 00:11:02.600 { 00:11:02.600 "trtype": "TCP", 00:11:02.600 "adrfam": "IPv4", 00:11:02.600 "traddr": "10.0.0.2", 00:11:02.600 "trsvcid": "4420" 00:11:02.600 } 00:11:02.600 ], 00:11:02.600 "allow_any_host": true, 00:11:02.600 "hosts": [], 00:11:02.600 "serial_number": "SPDK00000000000004", 00:11:02.600 "model_number": "SPDK bdev Controller", 00:11:02.600 "max_namespaces": 32, 00:11:02.600 "min_cntlid": 1, 00:11:02.600 "max_cntlid": 65519, 00:11:02.600 "namespaces": [ 00:11:02.600 { 00:11:02.600 "nsid": 1, 00:11:02.600 "bdev_name": "Null4", 00:11:02.600 "name": "Null4", 00:11:02.600 "nguid": "24AE08BD1A2E4D0F941A0E5D9359F85C", 00:11:02.600 "uuid": "24ae08bd-1a2e-4d0f-941a-0e5d9359f85c" 00:11:02.600 } 00:11:02.600 ] 00:11:02.600 } 00:11:02.600 ] 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.600 18:18:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:02.600 18:18:20 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:02.600 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.858 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:02.858 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:02.858 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:02.858 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:02.858 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:02.858 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:02.858 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:02.858 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:02.858 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:02.858 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:02.858 rmmod nvme_tcp 00:11:02.858 rmmod nvme_fabrics 00:11:02.858 rmmod nvme_keyring 00:11:02.858 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:02.858 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:02.858 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:02.858 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 1194642 ']' 00:11:02.858 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 1194642 00:11:02.858 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 1194642 ']' 00:11:02.858 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 1194642 00:11:02.858 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:11:02.858 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:02.858 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1194642 00:11:02.858 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:02.858 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:02.858 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1194642' 00:11:02.858 killing process with pid 1194642 00:11:02.858 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 1194642 00:11:02.858 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 1194642 00:11:03.116 18:18:21 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:03.116 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:03.116 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:03.116 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:03.116 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:11:03.116 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:03.116 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:11:03.116 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:03.116 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:03.116 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.116 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.116 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:05.652 00:11:05.652 real 0m5.865s 00:11:05.652 user 0m4.891s 00:11:05.652 sys 0m2.074s 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:05.652 ************************************ 00:11:05.652 END TEST nvmf_target_discovery 00:11:05.652 ************************************ 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:05.652 ************************************ 00:11:05.652 START TEST nvmf_referrals 00:11:05.652 ************************************ 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:05.652 * Looking for test storage... 
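[Editor's note] The nvmf_target_discovery run that finishes above is easier to follow with the xtrace prefixes stripped. A condensed sketch of the same sequence, assuming scripts/rpc.py (relative to the spdk checkout) is called directly instead of the harness's rpc_cmd wrapper, and with the host NQN/ID options omitted from the discover call; method names and arguments are the ones visible in the trace:

    # back four subsystems with null bdevs, namespaces and TCP listeners
    for i in 1 2 3 4; do
        scripts/rpc.py bdev_null_create Null$i 102400 512
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
            -a -s SPDK0000000000000$i
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done
    # expose the discovery service itself and advertise one referral
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
    # the initiator side should now see 6 records: the current discovery
    # subsystem, cnode1..cnode4, and the port-4430 referral
    nvme discover -t tcp -a 10.0.0.2 -s 4420

Teardown, as the trace shows, is the mirror image: nvmf_delete_subsystem and bdev_null_delete per index, then nvmf_discovery_remove_referral.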
00:11:05.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:05.652 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:05.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.653 --rc genhtml_branch_coverage=1 00:11:05.653 --rc genhtml_function_coverage=1 00:11:05.653 --rc genhtml_legend=1 00:11:05.653 --rc geninfo_all_blocks=1 00:11:05.653 --rc geninfo_unexecuted_blocks=1 00:11:05.653 00:11:05.653 ' 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:05.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.653 --rc genhtml_branch_coverage=1 00:11:05.653 --rc genhtml_function_coverage=1 00:11:05.653 --rc genhtml_legend=1 00:11:05.653 --rc geninfo_all_blocks=1 00:11:05.653 --rc geninfo_unexecuted_blocks=1 00:11:05.653 00:11:05.653 ' 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:05.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.653 --rc genhtml_branch_coverage=1 00:11:05.653 --rc genhtml_function_coverage=1 00:11:05.653 --rc genhtml_legend=1 00:11:05.653 --rc geninfo_all_blocks=1 00:11:05.653 --rc geninfo_unexecuted_blocks=1 00:11:05.653 00:11:05.653 ' 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:05.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.653 --rc genhtml_branch_coverage=1 00:11:05.653 --rc genhtml_function_coverage=1 00:11:05.653 --rc genhtml_legend=1 00:11:05.653 --rc geninfo_all_blocks=1 00:11:05.653 --rc geninfo_unexecuted_blocks=1 00:11:05.653 00:11:05.653 ' 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:05.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
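[Editor's note] The host identity that every later nvme discover call in this log carries (--hostnqn/--hostid) comes from the common.sh lines above. A minimal sketch of the same idea outside the harness; the generated NQN differs per run, and the ID derivation shown here is just one illustrative way to strip the prefix (common.sh may derive it differently):

    # generate a host NQN once; nvme-cli returns nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # keep only the trailing UUID (assumed derivation)
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

    # query the discovery service the way the later referral checks do
    nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json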
00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:05.653 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:07.557 18:18:25 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:07.557 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:07.557 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:07.557 
18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:07.557 Found net devices under 0000:09:00.0: cvl_0_0 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:07.557 Found net devices under 0000:09:00.1: cvl_0_1 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:07.557 18:18:25 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:07.557 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:07.558 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:07.558 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:07.558 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:07.558 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:07.558 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:07.558 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:07.558 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:07.558 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:07.558 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:07.558 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:07.558 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:07.558 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:07.558 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:07.558 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:07.558 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:07.558 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:07.558 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:07.558 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:07.558 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:07.558 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:07.558 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:07.558 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:07.558 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:07.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:07.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:11:07.558 00:11:07.558 --- 10.0.0.2 ping statistics --- 00:11:07.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.558 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:11:07.558 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:07.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:07.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:11:07.816 00:11:07.816 --- 10.0.0.1 ping statistics --- 00:11:07.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.816 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:11:07.816 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:07.816 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:11:07.816 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:07.816 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:07.816 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:07.816 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:07.816 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:07.816 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:07.816 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:07.816 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:07.816 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:07.816 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:07.816 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.816 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=1196744 00:11:07.816 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 1196744 00:11:07.817 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:07.817 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 1196744 ']' 00:11:07.817 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.817 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:07.817 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
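[Editor's note] nvmf_tcp_init above splits the two detected e810 ports into a small point-to-point topology: the target-side interface is moved into its own network namespace and the nvmf_tgt app is started inside it. Stripped of the xtrace prefixes, the setup recorded above amounts to the following (interface names cvl_0_0/cvl_0_1 are the ones found on this machine; the SPDK_NVMF comment match on the iptables rule is dropped here for brevity):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1

    ip netns add cvl_0_0_ns_spdk                 # namespace that will hold the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target port into it

    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side (host)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side (netns)

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # allow the NVMe/TCP port through and sanity-check both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1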
00:11:07.817 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:07.817 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:07.817 [2024-10-08 18:18:25.924888] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:11:07.817 [2024-10-08 18:18:25.924975] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.817 [2024-10-08 18:18:25.993620] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:07.817 [2024-10-08 18:18:26.102702] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:07.817 [2024-10-08 18:18:26.102757] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:07.817 [2024-10-08 18:18:26.102772] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:07.817 [2024-10-08 18:18:26.102785] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:07.817 [2024-10-08 18:18:26.102795] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:08.088 [2024-10-08 18:18:26.104544] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.088 [2024-10-08 18:18:26.104571] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.088 [2024-10-08 18:18:26.104599] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.088 [2024-10-08 18:18:26.104602] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:08.088 [2024-10-08 18:18:26.263962] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
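[Editor's note] With the TCP transport created and the discovery listener about to come up on 10.0.0.2:8009, the trace that follows exercises the referral RPCs. A condensed sketch of what referrals.sh verifies, again using scripts/rpc.py in place of the rpc_cmd wrapper and omitting the host NQN/ID options; the jq filters are copied from the get_referral_ips helper visible in the trace:

    # advertise three referrals on the discovery subsystem
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done

    # view 1: ask the target directly over RPC
    scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

    # view 2: ask over the wire, as an initiator would
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

    # both views should agree: 127.0.0.2 127.0.0.3 127.0.0.4

The trace then removes all three referrals, confirms the list is empty from both views, and re-adds 127.0.0.2 twice, once with the discovery NQN and once with nqn.2016-06.io.spdk:cnode1, to check that subsystem-specific referrals are reported as well.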
00:11:08.088 [2024-10-08 18:18:26.276218] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:08.088 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.347 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:08.347 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:08.347 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:08.347 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:08.347 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:08.347 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:08.347 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:08.347 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:08.347 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:08.347 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:08.347 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:08.347 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.347 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:08.347 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.347 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:08.347 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.347 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:08.347 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.347 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:08.347 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.347 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:08.347 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.347 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:08.347 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:08.347 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.347 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:08.347 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.347 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:08.347 18:18:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:08.347 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:08.347 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:08.347 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:08.347 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:08.347 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:08.606 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:08.606 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:08.606 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:08.606 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.606 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:08.606 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.606 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:08.606 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.606 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:08.606 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.606 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:08.606 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:08.606 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:08.606 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:08.606 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.606 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:08.606 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:08.606 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.606 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:08.606 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:08.606 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:08.606 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:08.606 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:08.606 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:08.606 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:08.606 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:08.863 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:08.864 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:08.864 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:08.864 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:08.864 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:08.864 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:08.864 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:09.121 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:09.121 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:09.121 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:09.121 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:09.121 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:09.121 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:09.121 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:09.121 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:09.121 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.121 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:09.121 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.121 18:18:27 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:09.121 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:09.121 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:09.121 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.121 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:09.121 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:09.121 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:09.121 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.379 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:09.379 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:09.379 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:09.379 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:09.379 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:09.379 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:09.379 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:09.379 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:09.379 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:09.379 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:09.379 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:09.379 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:09.379 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:09.379 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:09.379 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:09.637 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:09.637 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:09.637 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:09.637 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@76 -- # jq -r .subnqn 00:11:09.637 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:09.637 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:09.894 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:09.894 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:09.894 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.894 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:09.894 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.894 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:09.894 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:09.894 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.894 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:09.894 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.894 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:09.894 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:09.894 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:09.894 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:09.894 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:09.894 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:09.894 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:10.176 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:10.176 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:10.176 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:10.176 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:10.176 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:10.176 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:10.176 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp 
']' 00:11:10.176 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:10.176 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:10.176 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:10.176 rmmod nvme_tcp 00:11:10.176 rmmod nvme_fabrics 00:11:10.176 rmmod nvme_keyring 00:11:10.176 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:10.176 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:10.176 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:10.176 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 1196744 ']' 00:11:10.176 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 1196744 00:11:10.177 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 1196744 ']' 00:11:10.177 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 1196744 00:11:10.177 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:11:10.177 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:10.177 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1196744 00:11:10.177 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:10.177 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:10.177 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1196744' 00:11:10.177 killing process with pid 1196744 00:11:10.177 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 1196744 00:11:10.177 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 1196744 00:11:10.466 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:10.466 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:10.466 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:10.466 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:10.466 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:11:10.466 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:10.466 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:11:10.466 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:10.466 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:10.466 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.466 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:10.466 18:18:28 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.370 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:12.370 00:11:12.370 real 0m7.226s 00:11:12.370 user 0m11.235s 00:11:12.370 sys 0m2.347s 00:11:12.370 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:12.370 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:12.370 ************************************ 00:11:12.370 END TEST nvmf_referrals 00:11:12.370 ************************************ 00:11:12.370 18:18:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:12.370 18:18:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:12.370 18:18:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:12.370 18:18:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:12.629 ************************************ 00:11:12.629 START TEST nvmf_connect_disconnect 00:11:12.629 ************************************ 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:12.629 * Looking for test storage... 00:11:12.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:12.629 18:18:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:12.629 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:12.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.629 --rc genhtml_branch_coverage=1 00:11:12.629 --rc genhtml_function_coverage=1 00:11:12.629 --rc genhtml_legend=1 00:11:12.629 --rc geninfo_all_blocks=1 00:11:12.629 --rc geninfo_unexecuted_blocks=1 00:11:12.629 00:11:12.629 ' 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:12.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.630 --rc genhtml_branch_coverage=1 00:11:12.630 --rc genhtml_function_coverage=1 00:11:12.630 --rc genhtml_legend=1 00:11:12.630 --rc geninfo_all_blocks=1 00:11:12.630 --rc geninfo_unexecuted_blocks=1 00:11:12.630 00:11:12.630 ' 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:12.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.630 --rc genhtml_branch_coverage=1 00:11:12.630 --rc genhtml_function_coverage=1 00:11:12.630 --rc genhtml_legend=1 00:11:12.630 --rc geninfo_all_blocks=1 00:11:12.630 --rc geninfo_unexecuted_blocks=1 00:11:12.630 00:11:12.630 ' 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:12.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.630 --rc genhtml_branch_coverage=1 00:11:12.630 --rc genhtml_function_coverage=1 00:11:12.630 --rc genhtml_legend=1 00:11:12.630 --rc geninfo_all_blocks=1 00:11:12.630 --rc geninfo_unexecuted_blocks=1 00:11:12.630 00:11:12.630 ' 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:12.630 18:18:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:12.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:12.630 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:15.168 
18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:15.168 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:15.168 
18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:15.168 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:15.168 Found net devices under 0000:09:00.0: cvl_0_0 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:15.168 Found net devices under 0000:09:00.1: cvl_0_1 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:15.168 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:15.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:15.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:11:15.168 00:11:15.168 --- 10.0.0.2 ping statistics --- 00:11:15.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.169 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:11:15.169 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:15.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:15.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:11:15.169 00:11:15.169 --- 10.0.0.1 ping statistics --- 00:11:15.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.169 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:11:15.169 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:15.169 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:11:15.169 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:15.169 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:15.169 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:15.169 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:15.169 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:15.169 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:15.169 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:15.169 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:15.169 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:15.169 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:15.169 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:15.169 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # nvmfpid=1199172 00:11:15.169 18:18:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:15.169 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 1199172 00:11:15.169 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 1199172 ']' 00:11:15.169 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.169 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:15.169 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.169 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:15.169 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:15.169 [2024-10-08 18:18:33.269099] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:11:15.169 [2024-10-08 18:18:33.269171] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.169 [2024-10-08 18:18:33.334078] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:15.169 [2024-10-08 18:18:33.440087] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.169 [2024-10-08 18:18:33.440144] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.169 [2024-10-08 18:18:33.440173] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:15.169 [2024-10-08 18:18:33.440184] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:15.169 [2024-10-08 18:18:33.440194] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:15.169 [2024-10-08 18:18:33.441888] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.169 [2024-10-08 18:18:33.441954] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.169 [2024-10-08 18:18:33.441975] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:15.169 [2024-10-08 18:18:33.441978] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.427 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:15.427 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:11:15.427 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:15.427 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:15.427 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:15.427 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:15.427 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:15.427 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.427 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:15.427 [2024-10-08 18:18:33.607110] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:15.427 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.427 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:15.427 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.427 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:15.427 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.427 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:15.427 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:15.427 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.427 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:15.427 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.427 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:15.427 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.427 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:15.427 18:18:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.427 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.427 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.427 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:15.427 [2024-10-08 18:18:33.669204] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.427 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.427 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:15.427 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:15.427 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:18.707 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.759 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:29.561 rmmod nvme_tcp 00:11:29.561 rmmod nvme_fabrics 00:11:29.561 rmmod nvme_keyring 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 1199172 ']' 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 1199172 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1199172 ']' 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 1199172 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 
00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1199172 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1199172' 00:11:29.561 killing process with pid 1199172 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 1199172 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 1199172 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.561 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.102 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:32.102 00:11:32.102 real 0m19.222s 00:11:32.102 user 0m57.088s 00:11:32.102 sys 0m3.532s 00:11:32.102 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:32.102 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:32.102 ************************************ 00:11:32.102 END TEST nvmf_connect_disconnect 00:11:32.102 ************************************ 00:11:32.102 18:18:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:32.102 18:18:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:32.102 18:18:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:32.102 18:18:49 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:32.102 ************************************ 00:11:32.102 START TEST nvmf_multitarget 00:11:32.102 ************************************ 00:11:32.102 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:32.102 * Looking for test storage... 00:11:32.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:32.102 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:32.102 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:11:32.102 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:32.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.102 --rc genhtml_branch_coverage=1 00:11:32.102 --rc genhtml_function_coverage=1 00:11:32.102 --rc genhtml_legend=1 00:11:32.102 --rc geninfo_all_blocks=1 00:11:32.102 --rc geninfo_unexecuted_blocks=1 00:11:32.102 00:11:32.102 ' 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:32.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.102 --rc genhtml_branch_coverage=1 00:11:32.102 --rc genhtml_function_coverage=1 00:11:32.102 --rc genhtml_legend=1 00:11:32.102 --rc geninfo_all_blocks=1 00:11:32.102 --rc geninfo_unexecuted_blocks=1 00:11:32.102 00:11:32.102 ' 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:32.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.102 --rc genhtml_branch_coverage=1 00:11:32.102 --rc genhtml_function_coverage=1 00:11:32.102 --rc genhtml_legend=1 00:11:32.102 --rc geninfo_all_blocks=1 00:11:32.102 --rc geninfo_unexecuted_blocks=1 00:11:32.102 00:11:32.102 ' 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:32.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.102 --rc genhtml_branch_coverage=1 00:11:32.102 --rc genhtml_function_coverage=1 00:11:32.102 --rc genhtml_legend=1 00:11:32.102 --rc geninfo_all_blocks=1 00:11:32.102 --rc geninfo_unexecuted_blocks=1 00:11:32.102 00:11:32.102 ' 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:32.102 18:18:50 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.102 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:32.103 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.103 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:32.103 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:32.103 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:32.103 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:32.103 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.103 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.103 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:32.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:32.103 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:32.103 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:32.103 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:32.103 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:32.103 18:18:50 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:32.103 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:32.103 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:32.103 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:32.103 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:32.103 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:32.103 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.103 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.103 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.103 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:32.103 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:32.103 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:32.103 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:34.005 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:34.005 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.264 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.264 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:34.264 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:34.264 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:34.264 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:34.264 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:34.264 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:34.264 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.264 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:11:34.264 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:34.264 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:34.264 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:34.264 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:34.264 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:34.264 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.264 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:34.264 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.264 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:34.264 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:34.264 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:34.265 Found net devices under 0000:09:00.0: cvl_0_0 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:34.265 Found net devices under 0000:09:00.1: cvl_0_1 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:34.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:34.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:11:34.265 00:11:34.265 --- 10.0.0.2 ping statistics --- 00:11:34.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.265 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:34.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:34.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:11:34.265 00:11:34.265 --- 10.0.0.1 ping statistics --- 00:11:34.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.265 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=1202932 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 1202932 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 1202932 ']' 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:34.265 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:34.265 [2024-10-08 18:18:52.500605] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
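The nvmftestinit trace above shows how the harness splits the two e810 ports between a network namespace (target side) and the root namespace (initiator side) before launching nvmf_tgt. A minimal sketch of the equivalent manual setup, assuming the same cvl_0_0/cvl_0_1 interface names and the default 4420 listener port:

    # Sketch only: target interface lives in cvl_0_0_ns_spdk, initiator interface stays in the root namespace.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP from the target side
    ping -c 1 10.0.0.2                                                    # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target ns -> initiator

The harness additionally tags the iptables rule with an SPDK_NVMF comment so that teardown can strip it via iptables-save | grep -v SPDK_NVMF | iptables-restore, and the target application itself is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is why the reactor notices below report cores 0-3.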
00:11:34.265 [2024-10-08 18:18:52.500703] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.523 [2024-10-08 18:18:52.564789] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:34.523 [2024-10-08 18:18:52.666090] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.523 [2024-10-08 18:18:52.666148] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:34.523 [2024-10-08 18:18:52.666176] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:34.523 [2024-10-08 18:18:52.666187] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:34.523 [2024-10-08 18:18:52.666197] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:34.523 [2024-10-08 18:18:52.667902] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.523 [2024-10-08 18:18:52.668009] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:34.523 [2024-10-08 18:18:52.668117] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:34.523 [2024-10-08 18:18:52.668120] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.523 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:34.523 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:11:34.523 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:34.523 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:34.523 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:34.781 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:34.781 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:34.781 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:34.781 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:34.781 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:34.781 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:35.039 "nvmf_tgt_1" 00:11:35.039 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:35.039 "nvmf_tgt_2" 00:11:35.039 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:11:35.039 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:35.296 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:35.296 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:35.296 true 00:11:35.296 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:35.296 true 00:11:35.296 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:35.297 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:35.554 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:35.554 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:35.554 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:35.554 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:35.554 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:35.554 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:35.554 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:35.554 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:35.554 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:35.554 rmmod nvme_tcp 00:11:35.554 rmmod nvme_fabrics 00:11:35.554 rmmod nvme_keyring 00:11:35.554 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:35.554 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:35.554 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:35.554 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 1202932 ']' 00:11:35.554 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 1202932 00:11:35.554 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 1202932 ']' 00:11:35.554 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 1202932 00:11:35.554 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:11:35.554 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:35.554 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1202932 00:11:35.554 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:35.554 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:35.554 18:18:53 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1202932' 00:11:35.554 killing process with pid 1202932 00:11:35.554 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 1202932 00:11:35.554 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 1202932 00:11:35.813 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:35.813 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:35.813 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:35.813 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:35.813 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:11:35.813 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:35.813 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:11:35.813 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:35.813 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:35.813 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.813 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.813 18:18:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.349 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:38.349 00:11:38.349 real 0m6.186s 00:11:38.349 user 0m7.128s 00:11:38.349 sys 0m2.139s 00:11:38.349 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:38.349 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:38.349 ************************************ 00:11:38.349 END TEST nvmf_multitarget 00:11:38.349 ************************************ 00:11:38.349 18:18:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:38.349 18:18:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:38.349 18:18:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:38.349 18:18:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:38.349 ************************************ 00:11:38.349 START TEST nvmf_rpc 00:11:38.350 ************************************ 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:38.350 * Looking for test storage... 
00:11:38.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:38.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.350 --rc genhtml_branch_coverage=1 00:11:38.350 --rc genhtml_function_coverage=1 00:11:38.350 --rc genhtml_legend=1 00:11:38.350 --rc geninfo_all_blocks=1 00:11:38.350 --rc geninfo_unexecuted_blocks=1 00:11:38.350 00:11:38.350 ' 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:38.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.350 --rc genhtml_branch_coverage=1 00:11:38.350 --rc genhtml_function_coverage=1 00:11:38.350 --rc genhtml_legend=1 00:11:38.350 --rc geninfo_all_blocks=1 00:11:38.350 --rc geninfo_unexecuted_blocks=1 00:11:38.350 00:11:38.350 ' 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:38.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.350 --rc genhtml_branch_coverage=1 00:11:38.350 --rc genhtml_function_coverage=1 00:11:38.350 --rc genhtml_legend=1 00:11:38.350 --rc geninfo_all_blocks=1 00:11:38.350 --rc geninfo_unexecuted_blocks=1 00:11:38.350 00:11:38.350 ' 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:38.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.350 --rc genhtml_branch_coverage=1 00:11:38.350 --rc genhtml_function_coverage=1 00:11:38.350 --rc genhtml_legend=1 00:11:38.350 --rc geninfo_all_blocks=1 00:11:38.350 --rc geninfo_unexecuted_blocks=1 00:11:38.350 00:11:38.350 ' 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:38.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:38.350 18:18:56 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:38.350 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.351 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:38.351 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:38.351 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:38.351 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:40.887 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:40.887 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:40.887 Found net devices under 0000:09:00.0: cvl_0_0 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:40.887 Found net devices under 0000:09:00.1: cvl_0_1 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:40.887 18:18:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:40.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:40.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:11:40.887 00:11:40.887 --- 10.0.0.2 ping statistics --- 00:11:40.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.887 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:40.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:40.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:11:40.887 00:11:40.887 --- 10.0.0.1 ping statistics --- 00:11:40.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.887 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:11:40.887 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:40.888 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:40.888 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:40.888 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:40.888 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:40.888 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:40.888 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:40.888 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:40.888 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:40.888 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:40.888 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.888 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=1205045 00:11:40.888 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:40.888 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 1205045 00:11:40.888 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 1205045 ']' 00:11:40.888 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.888 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:40.888 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.888 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:40.888 18:18:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.888 [2024-10-08 18:18:58.754367] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
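
The nvmf_tcp_init sequence above builds the two-port loopback topology used for the rest of the run: the E810 port cvl_0_0 is moved into a fresh network namespace cvl_0_0_ns_spdk and addressed 10.0.0.2/24 (target side), cvl_0_1 stays in the root namespace as 10.0.0.1/24 (initiator side), an iptables rule accepts inbound TCP on port 4420, and a ping in each direction confirms connectivity before the target application is launched inside the namespace. Condensed, using the interface and namespace names from this run:

    # names (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk) are taken from this log; adjust per host
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                   # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root namespace
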
00:11:40.888 [2024-10-08 18:18:58.754463] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.888 [2024-10-08 18:18:58.818887] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:40.888 [2024-10-08 18:18:58.930178] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:40.888 [2024-10-08 18:18:58.930230] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:40.888 [2024-10-08 18:18:58.930265] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:40.888 [2024-10-08 18:18:58.930277] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:40.888 [2024-10-08 18:18:58.930293] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:40.888 [2024-10-08 18:18:58.932081] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.888 [2024-10-08 18:18:58.932146] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:40.888 [2024-10-08 18:18:58.932213] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:40.888 [2024-10-08 18:18:58.932216] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.888 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:40.888 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:11:40.888 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:40.888 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:40.888 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.888 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:40.888 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:40.888 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.888 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.888 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.888 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:40.888 "tick_rate": 2700000000, 00:11:40.888 "poll_groups": [ 00:11:40.888 { 00:11:40.888 "name": "nvmf_tgt_poll_group_000", 00:11:40.888 "admin_qpairs": 0, 00:11:40.888 "io_qpairs": 0, 00:11:40.888 "current_admin_qpairs": 0, 00:11:40.888 "current_io_qpairs": 0, 00:11:40.888 "pending_bdev_io": 0, 00:11:40.888 "completed_nvme_io": 0, 00:11:40.888 "transports": [] 00:11:40.888 }, 00:11:40.888 { 00:11:40.888 "name": "nvmf_tgt_poll_group_001", 00:11:40.888 "admin_qpairs": 0, 00:11:40.888 "io_qpairs": 0, 00:11:40.888 "current_admin_qpairs": 0, 00:11:40.888 "current_io_qpairs": 0, 00:11:40.888 "pending_bdev_io": 0, 00:11:40.888 "completed_nvme_io": 0, 00:11:40.888 "transports": [] 00:11:40.888 }, 00:11:40.888 { 00:11:40.888 "name": "nvmf_tgt_poll_group_002", 00:11:40.888 "admin_qpairs": 0, 00:11:40.888 "io_qpairs": 0, 00:11:40.888 
"current_admin_qpairs": 0, 00:11:40.888 "current_io_qpairs": 0, 00:11:40.888 "pending_bdev_io": 0, 00:11:40.888 "completed_nvme_io": 0, 00:11:40.888 "transports": [] 00:11:40.888 }, 00:11:40.888 { 00:11:40.888 "name": "nvmf_tgt_poll_group_003", 00:11:40.888 "admin_qpairs": 0, 00:11:40.888 "io_qpairs": 0, 00:11:40.888 "current_admin_qpairs": 0, 00:11:40.888 "current_io_qpairs": 0, 00:11:40.888 "pending_bdev_io": 0, 00:11:40.888 "completed_nvme_io": 0, 00:11:40.888 "transports": [] 00:11:40.888 } 00:11:40.888 ] 00:11:40.888 }' 00:11:40.888 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:40.888 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:40.888 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:40.888 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:40.888 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:40.888 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.147 [2024-10-08 18:18:59.192139] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:41.147 "tick_rate": 2700000000, 00:11:41.147 "poll_groups": [ 00:11:41.147 { 00:11:41.147 "name": "nvmf_tgt_poll_group_000", 00:11:41.147 "admin_qpairs": 0, 00:11:41.147 "io_qpairs": 0, 00:11:41.147 "current_admin_qpairs": 0, 00:11:41.147 "current_io_qpairs": 0, 00:11:41.147 "pending_bdev_io": 0, 00:11:41.147 "completed_nvme_io": 0, 00:11:41.147 "transports": [ 00:11:41.147 { 00:11:41.147 "trtype": "TCP" 00:11:41.147 } 00:11:41.147 ] 00:11:41.147 }, 00:11:41.147 { 00:11:41.147 "name": "nvmf_tgt_poll_group_001", 00:11:41.147 "admin_qpairs": 0, 00:11:41.147 "io_qpairs": 0, 00:11:41.147 "current_admin_qpairs": 0, 00:11:41.147 "current_io_qpairs": 0, 00:11:41.147 "pending_bdev_io": 0, 00:11:41.147 "completed_nvme_io": 0, 00:11:41.147 "transports": [ 00:11:41.147 { 00:11:41.147 "trtype": "TCP" 00:11:41.147 } 00:11:41.147 ] 00:11:41.147 }, 00:11:41.147 { 00:11:41.147 "name": "nvmf_tgt_poll_group_002", 00:11:41.147 "admin_qpairs": 0, 00:11:41.147 "io_qpairs": 0, 00:11:41.147 "current_admin_qpairs": 0, 00:11:41.147 "current_io_qpairs": 0, 00:11:41.147 "pending_bdev_io": 0, 00:11:41.147 "completed_nvme_io": 0, 00:11:41.147 "transports": [ 00:11:41.147 { 00:11:41.147 "trtype": "TCP" 
00:11:41.147 } 00:11:41.147 ] 00:11:41.147 }, 00:11:41.147 { 00:11:41.147 "name": "nvmf_tgt_poll_group_003", 00:11:41.147 "admin_qpairs": 0, 00:11:41.147 "io_qpairs": 0, 00:11:41.147 "current_admin_qpairs": 0, 00:11:41.147 "current_io_qpairs": 0, 00:11:41.147 "pending_bdev_io": 0, 00:11:41.147 "completed_nvme_io": 0, 00:11:41.147 "transports": [ 00:11:41.147 { 00:11:41.147 "trtype": "TCP" 00:11:41.147 } 00:11:41.147 ] 00:11:41.147 } 00:11:41.147 ] 00:11:41.147 }' 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.147 Malloc1 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
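
The jcount/jsum helpers exercised above are small jq wrappers over the nvmf_get_stats JSON captured in $stats: jcount counts the poll groups the target reports (4 here, one per reactor started with -m 0xF), and jsum totals a numeric per-group field such as admin_qpairs or io_qpairs (0 at this point, since nothing has connected yet). Roughly:

    # count poll groups; expected to equal the number of reactors
    echo "$stats" | jq '.poll_groups[].name' | wc -l
    # total one counter across all poll groups, as jsum does
    echo "$stats" | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'
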
common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.147 [2024-10-08 18:18:59.345496] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:11:41.147 [2024-10-08 18:18:59.368173] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:11:41.147 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:41.147 could not add new controller: failed to write to nvme-fabrics device 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:11:41.147 18:18:59 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.147 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:42.081 18:19:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:42.081 18:19:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:42.081 18:19:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:42.081 18:19:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:42.081 18:19:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:43.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:43.987 [2024-10-08 18:19:02.168302] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:11:43.987 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:43.987 could not add new controller: failed to write to nvme-fabrics device 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.987 
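
The two deliberately failing connects around this point exercise the target's host access control: the subsystem was created with allow_any_host disabled (-d), so the initiator's NQN is rejected with 'does not allow host'; the connect succeeds only after nvmf_subsystem_add_host registers that NQN, fails again once nvmf_subsystem_remove_host revokes it, and then succeeds for any host after nvmf_subsystem_allow_any_host -e. The three knobs, in the rpc_cmd form used by this test ($hostnqn stands for the uuid-based host NQN printed in the trace):

    # grant one host NQN access to the subsystem
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$hostnqn"
    # revoke it again
    rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$hostnqn"
    # or open the subsystem to any host (-d disables, -e enables)
    rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
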
18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.987 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:44.994 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:44.994 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:44.994 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:44.994 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:44.994 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:46.905 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:46.905 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:46.905 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:46.905 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:46.905 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:46.905 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:46.905 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:46.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.905 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:46.905 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:46.905 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:46.905 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.905 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:46.905 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.905 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:46.905 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.905 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.905 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.905 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.905 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:46.905 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:46.905 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:46.905 
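
waitforserial and waitforserial_disconnect, which gate every connect and disconnect in this test, poll lsblk for a block device whose SERIAL column matches SPDKISFASTANDAWESOME; the connect direction shown above retries up to 15 times with a 2 second pause between attempts. A stripped-down sketch of that wait (not the actual helper, which also tracks an expected device count):

    serial=SPDKISFASTANDAWESOME
    i=0
    while (( i++ <= 15 )); do
        # count namespaces exposing the test serial number
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == 1 )) && break
        sleep 2
    done
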
18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.905 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.905 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.905 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.905 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.905 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.905 [2024-10-08 18:19:05.036668] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.905 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.905 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:46.905 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.905 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.905 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.905 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:46.905 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.905 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.905 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.905 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:47.475 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:47.475 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:47.475 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:47.475 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:47.475 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:50.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.006 [2024-10-08 18:19:07.857850] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
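
From target/rpc.sh@81 onward the test repeats the same create/attach/connect/teardown cycle five times; stripped of the xtrace noise, each iteration is the following RPC and nvme-cli sequence ($hostnqn and $hostid stand for the uuid-based values printed in the trace):

    for i in $(seq 1 5); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect --hostnqn="$hostnqn" --hostid="$hostid" \
            -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done
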
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.006 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:50.268 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:50.268 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:50.268 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:50.268 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:50.268 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:52.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.804 [2024-10-08 18:19:10.646489] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.804 18:19:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:53.372 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:53.372 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:53.372 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:53.372 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:53.372 18:19:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:55.275 
18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:55.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.275 [2024-10-08 18:19:13.472494] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.275 18:19:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:55.843 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:55.843 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:55.843 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:55.843 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:55.843 18:19:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:58.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.376 [2024-10-08 18:19:16.247911] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.376 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:58.635 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:58.635 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:58.635 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:58.635 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:58.635 18:19:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:01.172 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:01.172 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:01.172 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:01.172 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:01.172 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:01.172 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:01.172 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:01.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.172 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:01.172 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:01.172 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:01.172 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:01.172 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:01.172 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:01.172 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:01.172 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:01.172 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.172 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.172 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.172 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:01.172 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.172 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.172 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.172 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:01.172 
18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.172 [2024-10-08 18:19:19.015960] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.172 [2024-10-08 18:19:19.064035] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.172 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.173 
18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.173 [2024-10-08 18:19:19.112199] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.173 [2024-10-08 18:19:19.160382] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.173 [2024-10-08 18:19:19.208536] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:01.173 "tick_rate": 2700000000, 00:12:01.173 "poll_groups": [ 00:12:01.173 { 00:12:01.173 "name": "nvmf_tgt_poll_group_000", 00:12:01.173 "admin_qpairs": 2, 00:12:01.173 "io_qpairs": 84, 00:12:01.173 "current_admin_qpairs": 0, 00:12:01.173 "current_io_qpairs": 0, 00:12:01.173 "pending_bdev_io": 0, 00:12:01.173 "completed_nvme_io": 183, 00:12:01.173 "transports": [ 00:12:01.173 { 00:12:01.173 "trtype": "TCP" 00:12:01.173 } 00:12:01.173 ] 00:12:01.173 }, 00:12:01.173 { 00:12:01.173 "name": "nvmf_tgt_poll_group_001", 00:12:01.173 "admin_qpairs": 2, 00:12:01.173 "io_qpairs": 84, 00:12:01.173 "current_admin_qpairs": 0, 00:12:01.173 "current_io_qpairs": 0, 00:12:01.173 "pending_bdev_io": 0, 00:12:01.173 "completed_nvme_io": 182, 00:12:01.173 "transports": [ 00:12:01.173 { 00:12:01.173 "trtype": "TCP" 00:12:01.173 } 00:12:01.173 ] 00:12:01.173 }, 00:12:01.173 { 00:12:01.173 "name": "nvmf_tgt_poll_group_002", 00:12:01.173 "admin_qpairs": 1, 00:12:01.173 "io_qpairs": 84, 00:12:01.173 "current_admin_qpairs": 0, 00:12:01.173 "current_io_qpairs": 0, 00:12:01.173 "pending_bdev_io": 0, 00:12:01.173 "completed_nvme_io": 136, 00:12:01.173 "transports": [ 00:12:01.173 { 00:12:01.173 "trtype": "TCP" 00:12:01.173 } 00:12:01.173 ] 00:12:01.173 }, 00:12:01.173 { 00:12:01.173 "name": "nvmf_tgt_poll_group_003", 00:12:01.173 "admin_qpairs": 2, 00:12:01.173 "io_qpairs": 84, 00:12:01.173 "current_admin_qpairs": 0, 00:12:01.173 "current_io_qpairs": 0, 00:12:01.173 "pending_bdev_io": 0, 00:12:01.173 "completed_nvme_io": 185, 00:12:01.173 "transports": [ 00:12:01.173 { 00:12:01.173 "trtype": "TCP" 00:12:01.173 } 00:12:01.173 ] 00:12:01.173 } 00:12:01.173 ] 00:12:01.173 }' 00:12:01.173 18:19:19 
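The nvmf_get_stats dump above is then reduced by the jsum helper invoked on the next lines: jq pulls one numeric field out of every poll group and awk sums the column, so the assertions only check that the totals are positive. A minimal sketch of that aggregation, assuming the JSON has already been captured into $stats as shown:

    # Sum a per-poll-group counter from the captured nvmf_get_stats JSON.
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    jsum '.poll_groups[].admin_qpairs'   # 2+2+1+2 = 7 in this run
    jsum '.poll_groups[].io_qpairs'      # 4*84    = 336 in this run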
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:01.173 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:01.174 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:01.174 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:01.174 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:01.174 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:01.174 rmmod nvme_tcp 00:12:01.174 rmmod nvme_fabrics 00:12:01.174 rmmod nvme_keyring 00:12:01.174 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:01.174 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:01.174 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:01.174 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 1205045 ']' 00:12:01.174 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 1205045 00:12:01.174 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 1205045 ']' 00:12:01.174 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 1205045 00:12:01.174 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:12:01.174 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:01.174 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1205045 00:12:01.174 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:01.174 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:01.174 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
1205045' 00:12:01.174 killing process with pid 1205045 00:12:01.174 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 1205045 00:12:01.432 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 1205045 00:12:01.692 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:01.692 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:01.692 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:01.692 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:01.692 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:12:01.692 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:01.692 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:12:01.692 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:01.692 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:01.692 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.692 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.692 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.605 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:03.605 00:12:03.605 real 0m25.640s 00:12:03.605 user 1m22.523s 00:12:03.605 sys 0m4.327s 00:12:03.605 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:03.605 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.605 ************************************ 00:12:03.605 END TEST nvmf_rpc 00:12:03.605 ************************************ 00:12:03.605 18:19:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:03.605 18:19:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:03.605 18:19:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:03.605 18:19:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:03.605 ************************************ 00:12:03.605 START TEST nvmf_invalid 00:12:03.605 ************************************ 00:12:03.605 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:03.865 * Looking for test storage... 
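The nvmftestfini path that closed out nvmf_rpc above unloads the host-side modules, kills the nvmf_tgt started for the test, and unwinds the network plumbing; only the iptables rules tagged SPDK_NVMF are removed, so unrelated firewall state survives. A condensed sketch of those steps as they appear in the trace (the namespace-removal helper is paraphrased here, not quoted from nvmf/common.sh):

    # Host side: drop the NVMe/TCP and fabrics modules (retried in the real helper).
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Target side: stop the nvmf_tgt reactor process started for this test.
    kill "$nvmfpid" && wait "$nvmfpid"

    # Remove only the SPDK_NVMF-tagged iptables rules added during setup.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Unwind the target namespace and flush the initiator-side address.
    ip netns delete cvl_0_0_ns_spdk      # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1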
00:12:03.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:03.865 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:03.865 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:12:03.865 18:19:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:03.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.865 --rc genhtml_branch_coverage=1 00:12:03.865 --rc genhtml_function_coverage=1 00:12:03.865 --rc genhtml_legend=1 00:12:03.865 --rc geninfo_all_blocks=1 00:12:03.865 --rc geninfo_unexecuted_blocks=1 00:12:03.865 00:12:03.865 ' 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:03.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.865 --rc genhtml_branch_coverage=1 00:12:03.865 --rc genhtml_function_coverage=1 00:12:03.865 --rc genhtml_legend=1 00:12:03.865 --rc geninfo_all_blocks=1 00:12:03.865 --rc geninfo_unexecuted_blocks=1 00:12:03.865 00:12:03.865 ' 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:03.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.865 --rc genhtml_branch_coverage=1 00:12:03.865 --rc genhtml_function_coverage=1 00:12:03.865 --rc genhtml_legend=1 00:12:03.865 --rc geninfo_all_blocks=1 00:12:03.865 --rc geninfo_unexecuted_blocks=1 00:12:03.865 00:12:03.865 ' 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:03.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.865 --rc genhtml_branch_coverage=1 00:12:03.865 --rc genhtml_function_coverage=1 00:12:03.865 --rc genhtml_legend=1 00:12:03.865 --rc geninfo_all_blocks=1 00:12:03.865 --rc geninfo_unexecuted_blocks=1 00:12:03.865 00:12:03.865 ' 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:03.865 18:19:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.865 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:03.866 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:03.866 18:19:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:06.399 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:06.399 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:06.399 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:06.399 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:06.399 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:06.399 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:06.399 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:06.399 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:06.399 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:06.399 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:06.399 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:06.399 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:06.399 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:06.399 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:06.399 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:06.399 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:06.399 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:06.399 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:06.399 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:06.399 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:06.399 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:06.399 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:06.399 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:06.400 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:06.400 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:06.400 Found net devices under 0000:09:00.0: cvl_0_0 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:06.400 Found net devices under 0000:09:00.1: cvl_0_1 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:06.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:06.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:12:06.400 00:12:06.400 --- 10.0.0.2 ping statistics --- 00:12:06.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.400 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:06.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:06.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:12:06.400 00:12:06.400 --- 10.0.0.1 ping statistics --- 00:12:06.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.400 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=1209670 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 1209670 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 1209670 ']' 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:06.400 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.401 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:06.401 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:06.401 [2024-10-08 18:19:24.470818] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
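The nvmf_tcp_init sequence traced above reduces to the shell steps below: a condensed sketch assembled only from commands visible in the log (interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, and port 4420 are taken from the trace; the address flushes, the iptables comment tag, and error handling are omitted).

  # Move the target-side port into its own network namespace so target and initiator use separate stacks
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the default NVMe/TCP port and confirm reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target application itself is then launched inside that namespace (the ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF invocation above), while rpc.py calls continue to reach it over the local UNIX socket /var/tmp/spdk.sock.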
00:12:06.401 [2024-10-08 18:19:24.470900] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:06.401 [2024-10-08 18:19:24.534483] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:06.401 [2024-10-08 18:19:24.639946] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:06.401 [2024-10-08 18:19:24.640002] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:06.401 [2024-10-08 18:19:24.640029] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:06.401 [2024-10-08 18:19:24.640040] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:06.401 [2024-10-08 18:19:24.640050] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:06.401 [2024-10-08 18:19:24.641739] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.401 [2024-10-08 18:19:24.641802] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:06.401 [2024-10-08 18:19:24.641870] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:06.401 [2024-10-08 18:19:24.641873] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.659 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:06.659 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:12:06.659 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:06.659 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:06.659 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:06.659 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.659 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:06.659 18:19:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode29874 00:12:06.916 [2024-10-08 18:19:25.098420] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:06.916 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:06.916 { 00:12:06.916 "nqn": "nqn.2016-06.io.spdk:cnode29874", 00:12:06.916 "tgt_name": "foobar", 00:12:06.916 "method": "nvmf_create_subsystem", 00:12:06.916 "req_id": 1 00:12:06.917 } 00:12:06.917 Got JSON-RPC error response 00:12:06.917 response: 00:12:06.917 { 00:12:06.917 "code": -32603, 00:12:06.917 "message": "Unable to find target foobar" 00:12:06.917 }' 00:12:06.917 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:06.917 { 00:12:06.917 "nqn": "nqn.2016-06.io.spdk:cnode29874", 00:12:06.917 "tgt_name": "foobar", 00:12:06.917 "method": "nvmf_create_subsystem", 00:12:06.917 "req_id": 1 00:12:06.917 } 00:12:06.917 Got JSON-RPC error response 00:12:06.917 
response: 00:12:06.917 { 00:12:06.917 "code": -32603, 00:12:06.917 "message": "Unable to find target foobar" 00:12:06.917 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:06.917 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:06.917 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode22596 00:12:07.175 [2024-10-08 18:19:25.423532] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22596: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:07.175 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:07.175 { 00:12:07.175 "nqn": "nqn.2016-06.io.spdk:cnode22596", 00:12:07.175 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:07.175 "method": "nvmf_create_subsystem", 00:12:07.175 "req_id": 1 00:12:07.175 } 00:12:07.175 Got JSON-RPC error response 00:12:07.175 response: 00:12:07.175 { 00:12:07.175 "code": -32602, 00:12:07.175 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:07.175 }' 00:12:07.175 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:07.175 { 00:12:07.175 "nqn": "nqn.2016-06.io.spdk:cnode22596", 00:12:07.175 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:07.175 "method": "nvmf_create_subsystem", 00:12:07.175 "req_id": 1 00:12:07.175 } 00:12:07.175 Got JSON-RPC error response 00:12:07.175 response: 00:12:07.175 { 00:12:07.175 "code": -32602, 00:12:07.175 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:07.175 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:07.175 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:07.175 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode26938 00:12:07.740 [2024-10-08 18:19:25.724579] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26938: invalid model number 'SPDK_Controller' 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:07.740 { 00:12:07.740 "nqn": "nqn.2016-06.io.spdk:cnode26938", 00:12:07.740 "model_number": "SPDK_Controller\u001f", 00:12:07.740 "method": "nvmf_create_subsystem", 00:12:07.740 "req_id": 1 00:12:07.740 } 00:12:07.740 Got JSON-RPC error response 00:12:07.740 response: 00:12:07.740 { 00:12:07.740 "code": -32602, 00:12:07.740 "message": "Invalid MN SPDK_Controller\u001f" 00:12:07.740 }' 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:07.740 { 00:12:07.740 "nqn": "nqn.2016-06.io.spdk:cnode26938", 00:12:07.740 "model_number": "SPDK_Controller\u001f", 00:12:07.740 "method": "nvmf_create_subsystem", 00:12:07.740 "req_id": 1 00:12:07.740 } 00:12:07.740 Got JSON-RPC error response 00:12:07.740 response: 00:12:07.740 { 00:12:07.740 "code": -32602, 00:12:07.740 "message": "Invalid MN SPDK_Controller\u001f" 00:12:07.740 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:07.740 18:19:25 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
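The long per-character trace surrounding this point comes from invalid.sh's gen_random_s helper, which assembles an arbitrary printable string (21 characters here) to use as a deliberately malformed serial number. A minimal sketch of the same idea, not the exact helper; the random selection below is an assumption:

  gen_random_s() {
      # sketch only; the real helper lives in test/nvmf/target/invalid.sh
      local length=$1 ll ch string=''
      for ((ll = 0; ll < length; ll++)); do
          # pick a code point from the printable range 32-127 used by the chars array above
          printf -v ch "\\x$(printf '%x' $((32 + RANDOM % 96)))"
          string+=$ch
      done
      echo "$string"
  }

The real helper also appears to guard against the string starting with '-' (the [[ ... == \- ]] check at invalid.sh@28 further below), so rpc.py cannot mistake it for an option. The generated string is then passed as the -s (serial number) or -d (model number) argument to nvmf_create_subsystem, and the case only passes if the JSON-RPC response contains the matching 'Invalid SN' / 'Invalid MN' message, as in the request/response pairs that follow.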
00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:07.740 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x3a' 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 120 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ n == \- ]] 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'nH?\%gPzR:N9c("cxuVy`' 00:12:07.741 18:19:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'nH?\%gPzR:N9c("cxuVy`' nqn.2016-06.io.spdk:cnode17377 00:12:08.000 [2024-10-08 18:19:26.057696] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17377: invalid serial number 'nH?\%gPzR:N9c("cxuVy`' 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:08.000 { 00:12:08.000 "nqn": "nqn.2016-06.io.spdk:cnode17377", 00:12:08.000 "serial_number": "nH?\\%gPzR:N9c(\"cxuVy`", 00:12:08.000 "method": "nvmf_create_subsystem", 00:12:08.000 "req_id": 1 00:12:08.000 } 00:12:08.000 Got JSON-RPC error response 00:12:08.000 response: 
00:12:08.000 { 00:12:08.000 "code": -32602, 00:12:08.000 "message": "Invalid SN nH?\\%gPzR:N9c(\"cxuVy`" 00:12:08.000 }' 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:08.000 { 00:12:08.000 "nqn": "nqn.2016-06.io.spdk:cnode17377", 00:12:08.000 "serial_number": "nH?\\%gPzR:N9c(\"cxuVy`", 00:12:08.000 "method": "nvmf_create_subsystem", 00:12:08.000 "req_id": 1 00:12:08.000 } 00:12:08.000 Got JSON-RPC error response 00:12:08.000 response: 00:12:08.000 { 00:12:08.000 "code": -32602, 00:12:08.000 "message": "Invalid SN nH?\\%gPzR:N9c(\"cxuVy`" 00:12:08.000 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 82 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:08.000 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 
-- # string+='`' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x6f' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.001 18:19:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:08.001 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:08.002 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.002 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.002 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:08.002 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:08.002 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:08.002 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.002 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.002 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:08.002 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:08.002 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:08.002 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:08.002 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:08.002 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ N == \- ]] 00:12:08.002 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'NQ+RAPyh:O)$Q(EHm; $&1$`,v6R(~o~^K/{x/`qJ' 00:12:08.002 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'NQ+RAPyh:O)$Q(EHm; $&1$`,v6R(~o~^K/{x/`qJ' nqn.2016-06.io.spdk:cnode17454 00:12:08.259 [2024-10-08 18:19:26.483161] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17454: invalid model number 'NQ+RAPyh:O)$Q(EHm; $&1$`,v6R(~o~^K/{x/`qJ' 00:12:08.259 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:08.259 { 00:12:08.259 "nqn": "nqn.2016-06.io.spdk:cnode17454", 00:12:08.259 "model_number": "NQ+RAPyh:O)$Q(EHm; $&1$`,v6R(~o~^K/{x/`qJ", 00:12:08.259 "method": "nvmf_create_subsystem", 00:12:08.259 "req_id": 1 00:12:08.259 } 00:12:08.259 Got JSON-RPC error response 00:12:08.259 response: 00:12:08.259 { 00:12:08.259 "code": -32602, 00:12:08.259 "message": "Invalid MN NQ+RAPyh:O)$Q(EHm; $&1$`,v6R(~o~^K/{x/`qJ" 00:12:08.259 }' 00:12:08.259 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:08.259 { 00:12:08.259 "nqn": "nqn.2016-06.io.spdk:cnode17454", 00:12:08.259 "model_number": "NQ+RAPyh:O)$Q(EHm; 
$&1$`,v6R(~o~^K/{x/`qJ", 00:12:08.259 "method": "nvmf_create_subsystem", 00:12:08.259 "req_id": 1 00:12:08.259 } 00:12:08.259 Got JSON-RPC error response 00:12:08.259 response: 00:12:08.259 { 00:12:08.259 "code": -32602, 00:12:08.259 "message": "Invalid MN NQ+RAPyh:O)$Q(EHm; $&1$`,v6R(~o~^K/{x/`qJ" 00:12:08.259 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:08.259 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:08.517 [2024-10-08 18:19:26.760179] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.517 18:19:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:09.082 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:09.082 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:09.082 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:09.082 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:09.082 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:09.082 [2024-10-08 18:19:27.314062] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:09.082 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:09.082 { 00:12:09.082 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:09.082 "listen_address": { 00:12:09.082 "trtype": "tcp", 00:12:09.082 "traddr": "", 00:12:09.082 "trsvcid": "4421" 00:12:09.082 }, 00:12:09.082 "method": "nvmf_subsystem_remove_listener", 00:12:09.082 "req_id": 1 00:12:09.082 } 00:12:09.082 Got JSON-RPC error response 00:12:09.082 response: 00:12:09.082 { 00:12:09.082 "code": -32602, 00:12:09.082 "message": "Invalid parameters" 00:12:09.082 }' 00:12:09.082 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:09.082 { 00:12:09.082 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:09.082 "listen_address": { 00:12:09.082 "trtype": "tcp", 00:12:09.082 "traddr": "", 00:12:09.082 "trsvcid": "4421" 00:12:09.082 }, 00:12:09.082 "method": "nvmf_subsystem_remove_listener", 00:12:09.082 "req_id": 1 00:12:09.082 } 00:12:09.082 Got JSON-RPC error response 00:12:09.082 response: 00:12:09.082 { 00:12:09.082 "code": -32602, 00:12:09.082 "message": "Invalid parameters" 00:12:09.082 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:09.082 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11335 -i 0 00:12:09.339 [2024-10-08 18:19:27.582893] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11335: invalid cntlid range [0-65519] 00:12:09.339 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:09.339 { 00:12:09.339 "nqn": "nqn.2016-06.io.spdk:cnode11335", 00:12:09.339 "min_cntlid": 0, 00:12:09.339 "method": "nvmf_create_subsystem", 00:12:09.339 "req_id": 1 00:12:09.339 } 00:12:09.339 Got JSON-RPC error response 00:12:09.339 
response: 00:12:09.339 { 00:12:09.339 "code": -32602, 00:12:09.339 "message": "Invalid cntlid range [0-65519]" 00:12:09.339 }' 00:12:09.339 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:09.339 { 00:12:09.339 "nqn": "nqn.2016-06.io.spdk:cnode11335", 00:12:09.339 "min_cntlid": 0, 00:12:09.339 "method": "nvmf_create_subsystem", 00:12:09.339 "req_id": 1 00:12:09.339 } 00:12:09.339 Got JSON-RPC error response 00:12:09.339 response: 00:12:09.339 { 00:12:09.339 "code": -32602, 00:12:09.339 "message": "Invalid cntlid range [0-65519]" 00:12:09.339 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:09.339 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode137 -i 65520 00:12:09.597 [2024-10-08 18:19:27.859874] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode137: invalid cntlid range [65520-65519] 00:12:09.597 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:09.597 { 00:12:09.597 "nqn": "nqn.2016-06.io.spdk:cnode137", 00:12:09.597 "min_cntlid": 65520, 00:12:09.597 "method": "nvmf_create_subsystem", 00:12:09.597 "req_id": 1 00:12:09.597 } 00:12:09.597 Got JSON-RPC error response 00:12:09.597 response: 00:12:09.597 { 00:12:09.597 "code": -32602, 00:12:09.597 "message": "Invalid cntlid range [65520-65519]" 00:12:09.597 }' 00:12:09.597 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:09.597 { 00:12:09.597 "nqn": "nqn.2016-06.io.spdk:cnode137", 00:12:09.597 "min_cntlid": 65520, 00:12:09.597 "method": "nvmf_create_subsystem", 00:12:09.597 "req_id": 1 00:12:09.597 } 00:12:09.597 Got JSON-RPC error response 00:12:09.597 response: 00:12:09.597 { 00:12:09.597 "code": -32602, 00:12:09.597 "message": "Invalid cntlid range [65520-65519]" 00:12:09.597 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:09.597 18:19:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3153 -I 0 00:12:09.854 [2024-10-08 18:19:28.132756] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3153: invalid cntlid range [1-0] 00:12:10.112 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:10.112 { 00:12:10.112 "nqn": "nqn.2016-06.io.spdk:cnode3153", 00:12:10.112 "max_cntlid": 0, 00:12:10.112 "method": "nvmf_create_subsystem", 00:12:10.112 "req_id": 1 00:12:10.112 } 00:12:10.112 Got JSON-RPC error response 00:12:10.112 response: 00:12:10.112 { 00:12:10.112 "code": -32602, 00:12:10.112 "message": "Invalid cntlid range [1-0]" 00:12:10.112 }' 00:12:10.112 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:10.112 { 00:12:10.112 "nqn": "nqn.2016-06.io.spdk:cnode3153", 00:12:10.112 "max_cntlid": 0, 00:12:10.112 "method": "nvmf_create_subsystem", 00:12:10.112 "req_id": 1 00:12:10.112 } 00:12:10.112 Got JSON-RPC error response 00:12:10.112 response: 00:12:10.112 { 00:12:10.112 "code": -32602, 00:12:10.112 "message": "Invalid cntlid range [1-0]" 00:12:10.112 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:10.112 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30016 -I 65520 00:12:10.369 [2024-10-08 18:19:28.405671] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30016: invalid cntlid range [1-65520] 00:12:10.370 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:10.370 { 00:12:10.370 "nqn": "nqn.2016-06.io.spdk:cnode30016", 00:12:10.370 "max_cntlid": 65520, 00:12:10.370 "method": "nvmf_create_subsystem", 00:12:10.370 "req_id": 1 00:12:10.370 } 00:12:10.370 Got JSON-RPC error response 00:12:10.370 response: 00:12:10.370 { 00:12:10.370 "code": -32602, 00:12:10.370 "message": "Invalid cntlid range [1-65520]" 00:12:10.370 }' 00:12:10.370 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:10.370 { 00:12:10.370 "nqn": "nqn.2016-06.io.spdk:cnode30016", 00:12:10.370 "max_cntlid": 65520, 00:12:10.370 "method": "nvmf_create_subsystem", 00:12:10.370 "req_id": 1 00:12:10.370 } 00:12:10.370 Got JSON-RPC error response 00:12:10.370 response: 00:12:10.370 { 00:12:10.370 "code": -32602, 00:12:10.370 "message": "Invalid cntlid range [1-65520]" 00:12:10.370 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:10.370 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode410 -i 6 -I 5 00:12:10.627 [2024-10-08 18:19:28.670597] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode410: invalid cntlid range [6-5] 00:12:10.627 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:10.627 { 00:12:10.627 "nqn": "nqn.2016-06.io.spdk:cnode410", 00:12:10.627 "min_cntlid": 6, 00:12:10.627 "max_cntlid": 5, 00:12:10.627 "method": "nvmf_create_subsystem", 00:12:10.627 "req_id": 1 00:12:10.627 } 00:12:10.627 Got JSON-RPC error response 00:12:10.627 response: 00:12:10.627 { 00:12:10.627 "code": -32602, 00:12:10.627 "message": "Invalid cntlid range [6-5]" 00:12:10.627 }' 00:12:10.627 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:10.627 { 00:12:10.627 "nqn": "nqn.2016-06.io.spdk:cnode410", 00:12:10.627 "min_cntlid": 6, 00:12:10.627 "max_cntlid": 5, 00:12:10.627 "method": "nvmf_create_subsystem", 00:12:10.627 "req_id": 1 00:12:10.627 } 00:12:10.627 Got JSON-RPC error response 00:12:10.627 response: 00:12:10.627 { 00:12:10.627 "code": -32602, 00:12:10.627 "message": "Invalid cntlid range [6-5]" 00:12:10.627 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:10.627 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:10.627 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:10.627 { 00:12:10.627 "name": "foobar", 00:12:10.627 "method": "nvmf_delete_target", 00:12:10.627 "req_id": 1 00:12:10.627 } 00:12:10.627 Got JSON-RPC error response 00:12:10.627 response: 00:12:10.627 { 00:12:10.627 "code": -32602, 00:12:10.627 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:12:10.627 }' 00:12:10.627 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:10.627 { 00:12:10.627 "name": "foobar", 00:12:10.627 "method": "nvmf_delete_target", 00:12:10.627 "req_id": 1 00:12:10.627 } 00:12:10.627 Got JSON-RPC error response 00:12:10.627 response: 00:12:10.627 { 00:12:10.627 "code": -32602, 00:12:10.627 "message": "The specified target doesn't exist, cannot delete it." 00:12:10.627 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:10.627 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:10.627 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:10.627 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:10.627 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:10.627 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:10.627 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:10.627 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:10.627 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:10.627 rmmod nvme_tcp 00:12:10.627 rmmod nvme_fabrics 00:12:10.627 rmmod nvme_keyring 00:12:10.627 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:10.627 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:10.627 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:10.627 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@515 -- # '[' -n 1209670 ']' 00:12:10.627 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # killprocess 1209670 00:12:10.627 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 1209670 ']' 00:12:10.627 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 1209670 00:12:10.627 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:12:10.627 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:10.628 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1209670 00:12:10.884 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:10.884 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:10.884 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1209670' 00:12:10.884 killing process with pid 1209670 00:12:10.884 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 1209670 00:12:10.884 18:19:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 1209670 00:12:11.143 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:11.143 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:11.143 18:19:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:11.143 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:12:11.143 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-save 00:12:11.143 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:11.143 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-restore 00:12:11.143 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:11.143 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:11.143 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.143 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:11.143 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.050 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:13.050 00:12:13.050 real 0m9.377s 00:12:13.050 user 0m22.315s 00:12:13.050 sys 0m2.558s 00:12:13.050 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:13.050 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:13.050 ************************************ 00:12:13.050 END TEST nvmf_invalid 00:12:13.050 ************************************ 00:12:13.050 18:19:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:13.050 18:19:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:13.050 18:19:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:13.051 18:19:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:13.051 ************************************ 00:12:13.051 START TEST nvmf_connect_stress 00:12:13.051 ************************************ 00:12:13.051 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:13.309 * Looking for test storage... 
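The nvmf_invalid run that finishes above ends with the cntlid checks: rpc_nvmf_create_subsystem only accepts controller IDs in 1-65519 and requires min_cntlid <= max_cntlid, so 0, 65520 and a 6-5 ordering are each rejected with JSON-RPC error -32602 "Invalid cntlid range". A minimal sketch of the same probes, assuming a running nvmf_tgt reachable over the default /var/tmp/spdk.sock and the repo path used by this job (the cnode NQNs are the ones from the log; any unused NQN would do):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # min_cntlid below the floor of 1 -> "Invalid cntlid range [0-65519]"
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11335 -i 0 2>&1 | grep 'Invalid cntlid range'
    # min/max above the 65519 ceiling -> "[65520-65519]" / "[1-65520]"
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode137 -i 65520 2>&1 | grep 'Invalid cntlid range'
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30016 -I 65520 2>&1 | grep 'Invalid cntlid range'
    # max_cntlid of 0 and min greater than max -> "[1-0]" / "[6-5]"
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3153 -I 0 2>&1 | grep 'Invalid cntlid range'
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode410 -i 6 -I 5 2>&1 | grep 'Invalid cntlid range'

Each call fails without creating a subsystem, and invalid.sh asserts exactly that by matching the "Invalid cntlid range" text in the captured error output.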
00:12:13.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:13.309 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:13.309 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:12:13.309 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:13.309 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:13.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.310 --rc genhtml_branch_coverage=1 00:12:13.310 --rc genhtml_function_coverage=1 00:12:13.310 --rc genhtml_legend=1 00:12:13.310 --rc geninfo_all_blocks=1 00:12:13.310 --rc geninfo_unexecuted_blocks=1 00:12:13.310 00:12:13.310 ' 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:13.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.310 --rc genhtml_branch_coverage=1 00:12:13.310 --rc genhtml_function_coverage=1 00:12:13.310 --rc genhtml_legend=1 00:12:13.310 --rc geninfo_all_blocks=1 00:12:13.310 --rc geninfo_unexecuted_blocks=1 00:12:13.310 00:12:13.310 ' 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:13.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.310 --rc genhtml_branch_coverage=1 00:12:13.310 --rc genhtml_function_coverage=1 00:12:13.310 --rc genhtml_legend=1 00:12:13.310 --rc geninfo_all_blocks=1 00:12:13.310 --rc geninfo_unexecuted_blocks=1 00:12:13.310 00:12:13.310 ' 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:13.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.310 --rc genhtml_branch_coverage=1 00:12:13.310 --rc genhtml_function_coverage=1 00:12:13.310 --rc genhtml_legend=1 00:12:13.310 --rc geninfo_all_blocks=1 00:12:13.310 --rc geninfo_unexecuted_blocks=1 00:12:13.310 00:12:13.310 ' 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:13.310 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:13.310 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:13.311 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:13.311 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:13.311 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:13.311 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.311 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:13.311 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:13.311 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:13.311 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:13.311 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:13.311 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.840 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:15.840 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:15.840 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:15.840 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:15.840 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:15.840 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:15.840 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:15.840 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:15.840 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:15.840 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:15.840 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:15.840 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:15.840 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:15.840 18:19:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:15.840 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:15.840 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:15.840 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:15.840 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:15.840 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:15.840 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:15.840 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:15.840 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:15.840 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:15.840 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:15.840 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:15.841 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:15.841 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:15.841 Found net devices under 0000:09:00.0: cvl_0_0 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:15.841 Found net devices under 0000:09:00.1: cvl_0_1 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:15.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:15.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:12:15.841 00:12:15.841 --- 10.0.0.2 ping statistics --- 00:12:15.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.841 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:15.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:15.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:12:15.841 00:12:15.841 --- 10.0.0.1 ping statistics --- 00:12:15.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.841 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=1212316 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 1212316 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 1212316 ']' 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:15.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:15.841 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.841 [2024-10-08 18:19:33.819645] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:12:15.841 [2024-10-08 18:19:33.819741] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.841 [2024-10-08 18:19:33.884383] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:15.841 [2024-10-08 18:19:33.997867] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.841 [2024-10-08 18:19:33.997925] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.842 [2024-10-08 18:19:33.997954] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.842 [2024-10-08 18:19:33.997965] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.842 [2024-10-08 18:19:33.997975] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:15.842 [2024-10-08 18:19:33.998983] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.842 [2024-10-08 18:19:33.999048] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.842 [2024-10-08 18:19:33.999052] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.842 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:15.842 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:12:15.842 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:15.842 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:16.124 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.124 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.124 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:16.124 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.124 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.124 [2024-10-08 18:19:34.158374] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:16.124 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.124 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:16.124 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
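The connect_stress prologue above is the standard phy TCP bring-up: nvmftestinit moves one e810 port (cvl_0_0) into the cvl_0_0_ns_spdk namespace as 10.0.0.2/24 while the initiator keeps cvl_0_1 at 10.0.0.1/24 in the root namespace, opens TCP/4420 in iptables, and ping-checks both directions; nvmf_tgt then starts inside the namespace on cores 1-3 (-m 0xE) and is configured over JSON-RPC with a TCP transport and subsystem cnode1 capped at 10 namespaces, after which the lines below add a 10.0.0.2:4420 listener and a null bdev. A condensed sketch of that bring-up, assuming root privileges, the same interface names, and the repo path used by this job:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NS=cvl_0_0_ns_spdk

    # Target-side port lives in its own network namespace; the initiator port stays in the root ns.
    ip netns add $NS
    ip link set cvl_0_0 netns $NS
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec $NS ip link set cvl_0_0 up
    ip netns exec $NS ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                      # initiator -> target
    ip netns exec $NS ping -c 1 10.0.0.1    # target -> initiator

    # Start the target inside the namespace (cores 1-3) and configure it over JSON-RPC
    # once /var/tmp/spdk.sock appears (the harness waits for it with waitforlisten).
    ip netns exec $NS $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $SPDK/scripts/rpc.py bdev_null_create NULL1 1000 512

connect_stress then stresses connections against nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 for 10 seconds (-t 10) while the script keeps issuing RPCs and checks with kill -0 that the stress process stays alive, which is the repeating "kill -0 1212345 / rpc_cmd" pattern in the lines that follow.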
00:12:16.124 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.124 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.124 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.124 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.124 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.125 [2024-10-08 18:19:34.184624] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.125 NULL1 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1212345 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:16.125 18:19:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:16.125 18:19:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1212345 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.125 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.416 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.416 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1212345 00:12:16.416 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.416 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.416 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.674 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.674 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1212345 00:12:16.674 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.674 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.674 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.931 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.931 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1212345 00:12:16.931 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.931 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.931 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.496 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.496 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1212345 00:12:17.496 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.496 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.496 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.753 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.753 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1212345 00:12:17.753 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.753 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.753 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:18.011 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.011 18:19:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1212345 00:12:18.011 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:18.011 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.011 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:18.268 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.268 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1212345 00:12:18.268 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:18.268 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.268 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:18.833 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.833 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1212345 00:12:18.833 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:18.833 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.833 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.090 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.090 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1212345 00:12:19.090 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:19.090 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.090 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.349 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.349 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1212345 00:12:19.349 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:19.349 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.349 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.607 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.607 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1212345 00:12:19.607 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:19.607 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.607 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.865 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.865 18:19:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1212345 00:12:19.865 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:19.865 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.865 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:20.430 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.430 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1212345 00:12:20.430 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:20.430 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.430 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:20.688 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.688 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1212345 00:12:20.688 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:20.688 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.688 18:19:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:20.945 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.945 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1212345 00:12:20.945 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:20.945 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.945 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.203 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.203 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1212345 00:12:21.203 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:21.203 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.203 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.460 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.460 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1212345 00:12:21.460 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:21.460 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.460 18:19:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.024 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.024 18:19:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1212345 00:12:22.024 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.024 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.024 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.282 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.282 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1212345 00:12:22.282 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.282 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.282 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.539 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.539 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1212345 00:12:22.539 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.539 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.539 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:22.797 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.797 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1212345 00:12:22.797 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:22.797 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.797 18:19:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.055 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.055 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1212345 00:12:23.055 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.055 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.055 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.620 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.620 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1212345 00:12:23.620 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.620 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.620 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:23.878 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.878 18:19:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1212345 00:12:23.878 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:23.878 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.878 18:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.136 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.136 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1212345 00:12:24.136 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:24.136 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.136 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.393 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.394 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1212345 00:12:24.394 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:24.394 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.394 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.651 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.651 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1212345 00:12:24.651 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:24.651 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.651 18:19:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.216 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.216 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1212345 00:12:25.216 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.216 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.216 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.473 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.473 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1212345 00:12:25.473 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.473 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.473 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.730 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.730 18:19:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1212345 00:12:25.730 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.730 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.730 18:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.987 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.987 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1212345 00:12:25.987 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.987 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.987 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.245 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:26.245 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.245 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1212345 00:12:26.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1212345) - No such process 00:12:26.245 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1212345 00:12:26.245 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:26.245 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:26.245 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:26.245 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:26.245 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:12:26.503 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:26.503 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:12:26.503 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:26.503 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:26.503 rmmod nvme_tcp 00:12:26.503 rmmod nvme_fabrics 00:12:26.503 rmmod nvme_keyring 00:12:26.503 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:26.503 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:12:26.503 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:12:26.503 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 1212316 ']' 00:12:26.503 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 1212316 00:12:26.503 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 1212316 ']' 00:12:26.503 18:19:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 1212316 00:12:26.503 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:12:26.503 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:26.503 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1212316 00:12:26.503 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:26.503 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:26.503 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1212316' 00:12:26.503 killing process with pid 1212316 00:12:26.503 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 1212316 00:12:26.503 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 1212316 00:12:26.761 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:26.761 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:26.761 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:26.761 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:12:26.761 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:12:26.761 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:26.761 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:12:26.761 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:26.761 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:26.761 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.761 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:26.761 18:19:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.666 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:28.926 00:12:28.926 real 0m15.661s 00:12:28.926 user 0m38.730s 00:12:28.926 sys 0m5.988s 00:12:28.926 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:28.926 18:19:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.926 ************************************ 00:12:28.926 END TEST nvmf_connect_stress 00:12:28.926 ************************************ 00:12:28.926 18:19:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:28.926 18:19:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:28.926 
18:19:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:28.926 18:19:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:28.926 ************************************ 00:12:28.926 START TEST nvmf_fused_ordering 00:12:28.926 ************************************ 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:28.926 * Looking for test storage... 00:12:28.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:28.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.926 --rc genhtml_branch_coverage=1 00:12:28.926 --rc genhtml_function_coverage=1 00:12:28.926 --rc genhtml_legend=1 00:12:28.926 --rc geninfo_all_blocks=1 00:12:28.926 --rc geninfo_unexecuted_blocks=1 00:12:28.926 00:12:28.926 ' 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:28.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.926 --rc genhtml_branch_coverage=1 00:12:28.926 --rc genhtml_function_coverage=1 00:12:28.926 --rc genhtml_legend=1 00:12:28.926 --rc geninfo_all_blocks=1 00:12:28.926 --rc geninfo_unexecuted_blocks=1 00:12:28.926 00:12:28.926 ' 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:28.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.926 --rc genhtml_branch_coverage=1 00:12:28.926 --rc genhtml_function_coverage=1 00:12:28.926 --rc genhtml_legend=1 00:12:28.926 --rc geninfo_all_blocks=1 00:12:28.926 --rc geninfo_unexecuted_blocks=1 00:12:28.926 00:12:28.926 ' 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:28.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.926 --rc genhtml_branch_coverage=1 00:12:28.926 --rc genhtml_function_coverage=1 00:12:28.926 --rc genhtml_legend=1 00:12:28.926 --rc geninfo_all_blocks=1 00:12:28.926 --rc geninfo_unexecuted_blocks=1 00:12:28.926 00:12:28.926 ' 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:28.926 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:28.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:12:28.927 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:12:31.456 18:19:49 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:31.456 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:31.456 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:31.456 Found net devices under 0000:09:00.0: cvl_0_0 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:31.456 Found net devices under 0000:09:00.1: cvl_0_1 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:31.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:31.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:12:31.456 00:12:31.456 --- 10.0.0.2 ping statistics --- 00:12:31.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.456 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:31.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:31.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:12:31.456 00:12:31.456 --- 10.0.0.1 ping statistics --- 00:12:31.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.456 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.456 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:12:31.457 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:31.457 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.457 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:31.457 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:31.457 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.457 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:31.457 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:31.457 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:31.457 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:31.457 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:31.457 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:31.457 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=1215616 00:12:31.457 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:31.457 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 1215616 00:12:31.457 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 1215616 ']' 00:12:31.457 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.457 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:31.457 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:31.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.457 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:31.457 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:31.457 [2024-10-08 18:19:49.531672] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:12:31.457 [2024-10-08 18:19:49.531772] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.457 [2024-10-08 18:19:49.593745] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.457 [2024-10-08 18:19:49.696480] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.457 [2024-10-08 18:19:49.696527] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.457 [2024-10-08 18:19:49.696549] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:31.457 [2024-10-08 18:19:49.696577] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:31.457 [2024-10-08 18:19:49.696586] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:31.457 [2024-10-08 18:19:49.697169] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.715 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:31.715 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:12:31.715 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:31.715 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:31.715 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:31.715 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.715 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:31.715 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.715 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:31.715 [2024-10-08 18:19:49.842498] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:31.715 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.715 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:31.715 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.715 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:31.715 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:31.715 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.715 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.715 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:31.715 [2024-10-08 18:19:49.858750] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.715 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.715 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:31.716 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.716 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:31.716 NULL1 00:12:31.716 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.716 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:31.716 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.716 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:31.716 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.716 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:31.716 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.716 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:31.716 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.716 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:31.716 [2024-10-08 18:19:49.902529] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:12:31.716 [2024-10-08 18:19:49.902565] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1215644 ] 00:12:32.281 Attached to nqn.2016-06.io.spdk:cnode1 00:12:32.281 Namespace ID: 1 size: 1GB
00:12:32.281 fused_ordering(0) ... 00:12:33.930 fused_ordering(1023) [fused_ordering(n) was logged once for each n from 0 through 1023 between 00:12:32.281 and 00:12:33.930; the 1024 near-identical entries are condensed here]
00:12:33.930 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:33.930 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:33.930 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:33.930 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:12:33.930 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:33.930 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:12:33.930 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:33.930 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:33.930 rmmod nvme_tcp 00:12:33.930 rmmod nvme_fabrics 00:12:33.930 rmmod nvme_keyring 00:12:33.930 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:33.930 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:12:33.930 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:12:33.930 18:19:52
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 1215616 ']' 00:12:33.930 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 1215616 00:12:33.930 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 1215616 ']' 00:12:33.930 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 1215616 00:12:33.930 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:12:33.930 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:33.930 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1215616 00:12:34.188 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:34.188 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:34.188 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1215616' 00:12:34.188 killing process with pid 1215616 00:12:34.188 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 1215616 00:12:34.188 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 1215616 00:12:34.446 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:34.446 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:34.446 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:34.446 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:12:34.446 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save 00:12:34.446 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:34.446 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore 00:12:34.446 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:34.446 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:34.446 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.446 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.446 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.352 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:36.352 00:12:36.352 real 0m7.549s 00:12:36.352 user 0m4.930s 00:12:36.352 sys 0m3.203s 00:12:36.352 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:36.352 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:36.352 ************************************ 00:12:36.352 END TEST nvmf_fused_ordering 00:12:36.352 
************************************ 00:12:36.352 18:19:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:36.352 18:19:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:36.352 18:19:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:36.352 18:19:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:36.352 ************************************ 00:12:36.352 START TEST nvmf_ns_masking 00:12:36.352 ************************************ 00:12:36.352 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:36.352 * Looking for test storage... 00:12:36.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:36.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.611 --rc genhtml_branch_coverage=1 00:12:36.611 --rc genhtml_function_coverage=1 00:12:36.611 --rc genhtml_legend=1 00:12:36.611 --rc geninfo_all_blocks=1 00:12:36.611 --rc geninfo_unexecuted_blocks=1 00:12:36.611 00:12:36.611 ' 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:36.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.611 --rc genhtml_branch_coverage=1 00:12:36.611 --rc genhtml_function_coverage=1 00:12:36.611 --rc genhtml_legend=1 00:12:36.611 --rc geninfo_all_blocks=1 00:12:36.611 --rc geninfo_unexecuted_blocks=1 00:12:36.611 00:12:36.611 ' 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:36.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.611 --rc genhtml_branch_coverage=1 00:12:36.611 --rc genhtml_function_coverage=1 00:12:36.611 --rc genhtml_legend=1 00:12:36.611 --rc geninfo_all_blocks=1 00:12:36.611 --rc geninfo_unexecuted_blocks=1 00:12:36.611 00:12:36.611 ' 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:36.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.611 --rc genhtml_branch_coverage=1 00:12:36.611 --rc genhtml_function_coverage=1 00:12:36.611 --rc genhtml_legend=1 00:12:36.611 --rc geninfo_all_blocks=1 00:12:36.611 --rc geninfo_unexecuted_blocks=1 00:12:36.611 00:12:36.611 ' 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.611 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:36.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=6c627bda-7236-4288-9a68-4d7b97740f94 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=ab5693fa-4819-4008-a650-3990866492d6 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=321c542a-2a63-45a3-8251-389c9da855a8 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:12:36.612 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:39.146 18:19:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:39.146 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:39.146 18:19:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.146 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:39.147 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:39.147 Found net devices under 0000:09:00.0: cvl_0_0 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 
00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:39.147 Found net devices under 0000:09:00.1: cvl_0_1 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:39.147 18:19:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:39.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:39.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:12:39.147 00:12:39.147 --- 10.0.0.2 ping statistics --- 00:12:39.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.147 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:39.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:39.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:12:39.147 00:12:39.147 --- 10.0.0.1 ping statistics --- 00:12:39.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.147 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:39.147 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:39.147 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:39.147 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:39.147 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:39.147 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:39.147 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=1217856 00:12:39.147 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:39.147 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 1217856 00:12:39.147 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1217856 ']' 00:12:39.147 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.147 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:39.147 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.147 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:39.147 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:39.147 [2024-10-08 18:19:57.063255] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:12:39.147 [2024-10-08 18:19:57.063354] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.147 [2024-10-08 18:19:57.124701] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.147 [2024-10-08 18:19:57.226961] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:39.147 [2024-10-08 18:19:57.227008] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:39.147 [2024-10-08 18:19:57.227030] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:39.147 [2024-10-08 18:19:57.227041] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:39.147 [2024-10-08 18:19:57.227049] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
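Note: before the target application came up, the trace above built the TCP test bed: the first E810 port was moved into a dedicated network namespace, both sides were addressed, TCP port 4420 was opened, and reachability was checked in both directions. Condensed into a plain sequence (interface names and addresses taken from this run; the real code is in nvmf/common.sh), the setup amounts to:

    ip netns add cvl_0_0_ns_spdk                        # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

With that in place, nvmf_tgt is launched under "ip netns exec cvl_0_0_ns_spdk" so it listens on 10.0.0.2 while the initiator-side tools keep running in the root namespace.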
00:12:39.147 [2024-10-08 18:19:57.227716] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.147 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:39.147 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:12:39.147 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:39.147 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:39.147 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:39.147 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.147 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:39.405 [2024-10-08 18:19:57.622916] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:39.405 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:39.405 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:39.405 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:39.663 Malloc1 00:12:39.664 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:39.922 Malloc2 00:12:40.186 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:40.454 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:40.712 18:19:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.970 [2024-10-08 18:19:59.013748] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.970 18:19:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:40.970 18:19:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 321c542a-2a63-45a3-8251-389c9da855a8 -a 10.0.0.2 -s 4420 -i 4 00:12:40.970 18:19:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:40.970 18:19:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:40.970 18:19:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:40.970 18:19:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:40.970 
18:19:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:43.550 [ 0]:0x1 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f391e285f55e48ba845fdb575f72566b 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f391e285f55e48ba845fdb575f72566b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:43.550 [ 0]:0x1 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f391e285f55e48ba845fdb575f72566b 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f391e285f55e48ba845fdb575f72566b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.550 18:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:43.550 [ 1]:0x2 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7c060da4233b455284cc3289573792d2 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7c060da4233b455284cc3289573792d2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:43.550 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.550 18:20:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.808 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:44.374 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:44.374 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 321c542a-2a63-45a3-8251-389c9da855a8 -a 10.0.0.2 -s 4420 -i 4 00:12:44.374 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:44.374 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:44.374 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:44.374 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:12:44.374 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:12:44.374 18:20:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:46.272 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:46.272 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:46.272 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:46.272 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:46.272 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:46.272 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # 
return 0 00:12:46.272 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:46.272 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:46.272 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:46.272 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:46.272 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:46.272 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:46.272 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:12:46.530 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:46.530 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:46.530 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:12:46.530 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:46.530 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:46.530 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:46.530 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:46.530 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:46.530 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:46.530 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:46.530 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:46.530 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:46.530 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:46.530 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:46.530 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:46.530 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:46.530 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:46.530 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:46.530 [ 0]:0x2 00:12:46.530 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:46.530 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:46.530 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=7c060da4233b455284cc3289573792d2 00:12:46.530 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7c060da4233b455284cc3289573792d2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:46.530 18:20:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:46.788 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:46.788 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:46.788 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:46.788 [ 0]:0x1 00:12:46.788 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:46.788 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:47.045 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f391e285f55e48ba845fdb575f72566b 00:12:47.046 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f391e285f55e48ba845fdb575f72566b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:47.046 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:47.046 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:47.046 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:47.046 [ 1]:0x2 00:12:47.046 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:47.046 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:47.046 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7c060da4233b455284cc3289573792d2 00:12:47.046 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7c060da4233b455284cc3289573792d2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:47.046 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:47.304 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:47.304 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:47.304 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:12:47.304 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:47.304 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:47.304 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:12:47.304 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:47.304 18:20:05 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:47.304 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:47.304 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:47.304 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:47.304 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:47.304 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:47.304 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:47.304 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:47.304 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:47.304 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:47.304 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:47.304 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:47.304 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:47.304 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:47.304 [ 0]:0x2 00:12:47.304 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:47.304 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:47.304 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7c060da4233b455284cc3289573792d2 00:12:47.304 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7c060da4233b455284cc3289573792d2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:47.304 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:47.304 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:47.304 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.304 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:47.869 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:47.869 18:20:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 321c542a-2a63-45a3-8251-389c9da855a8 -a 10.0.0.2 -s 4420 -i 4 00:12:47.869 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:47.869 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:47.869 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:47.869 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:47.869 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:47.869 18:20:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:50.394 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:50.394 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:50.394 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:50.394 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:50.394 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:50.394 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:50.394 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:50.394 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:50.394 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:50.394 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:50.394 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:50.394 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:50.394 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:50.394 [ 0]:0x1 00:12:50.394 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:50.394 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:50.394 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f391e285f55e48ba845fdb575f72566b 00:12:50.394 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f391e285f55e48ba845fdb575f72566b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:50.394 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:50.394 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:50.394 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:50.394 [ 1]:0x2 00:12:50.394 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:50.394 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:50.394 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7c060da4233b455284cc3289573792d2 00:12:50.394 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7c060da4233b455284cc3289573792d2 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:50.394 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:50.652 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:50.652 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:50.652 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:12:50.652 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:50.652 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:50.652 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:12:50.652 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:50.652 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:50.652 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:50.652 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:50.652 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:50.652 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:50.652 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:50.652 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:50.652 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:50.652 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:50.652 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:50.652 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:50.652 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:50.652 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:50.652 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:50.652 [ 0]:0x2 00:12:50.652 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:50.652 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:50.652 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7c060da4233b455284cc3289573792d2 00:12:50.652 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7c060da4233b455284cc3289573792d2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:50.652 18:20:08 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:50.652 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:50.652 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:50.653 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:50.653 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:50.653 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:50.653 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:50.653 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:50.653 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:50.653 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:50.653 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:50.653 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:51.218 [2024-10-08 18:20:09.220354] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:51.218 request: 00:12:51.218 { 00:12:51.218 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:51.218 "nsid": 2, 00:12:51.218 "host": "nqn.2016-06.io.spdk:host1", 00:12:51.218 "method": "nvmf_ns_remove_host", 00:12:51.218 "req_id": 1 00:12:51.218 } 00:12:51.218 Got JSON-RPC error response 00:12:51.218 response: 00:12:51.218 { 00:12:51.218 "code": -32602, 00:12:51.218 "message": "Invalid parameters" 00:12:51.218 } 00:12:51.218 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:51.218 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:51.218 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:51.218 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:51.218 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:51.218 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:51.218 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:12:51.218 18:20:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:51.218 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:51.218 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:12:51.218 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:51.218 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:51.218 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:51.218 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:51.218 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:51.218 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:51.218 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:51.218 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:51.218 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:51.218 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:51.218 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:51.218 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:51.218 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:51.218 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:51.218 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:51.218 [ 0]:0x2 00:12:51.218 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:51.218 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:51.218 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7c060da4233b455284cc3289573792d2 00:12:51.218 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7c060da4233b455284cc3289573792d2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:51.218 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:51.218 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:51.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.476 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1219481 00:12:51.476 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:51.476 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.476 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1219481 /var/tmp/host.sock 00:12:51.476 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1219481 ']' 00:12:51.476 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:12:51.476 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:51.476 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:51.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:51.476 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:51.476 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:51.476 [2024-10-08 18:20:09.593429] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:12:51.476 [2024-10-08 18:20:09.593525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1219481 ] 00:12:51.476 [2024-10-08 18:20:09.652433] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.476 [2024-10-08 18:20:09.762539] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.043 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:52.043 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:12:52.043 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.301 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:52.560 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 6c627bda-7236-4288-9a68-4d7b97740f94 00:12:52.560 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:12:52.560 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 6C627BDA723642889A684D7B97740F94 -i 00:12:52.818 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid ab5693fa-4819-4008-a650-3990866492d6 00:12:52.819 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:12:52.819 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g AB5693FA48194008A6503990866492D6 -i 00:12:53.076 18:20:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:53.334 18:20:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:53.591 18:20:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:53.591 18:20:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:53.849 nvme0n1 00:12:53.849 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:53.849 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:54.415 nvme1n2 00:12:54.415 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:54.415 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:54.415 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:54.415 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:54.415 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:54.673 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:54.673 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:54.673 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:54.673 18:20:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:54.931 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 6c627bda-7236-4288-9a68-4d7b97740f94 == \6\c\6\2\7\b\d\a\-\7\2\3\6\-\4\2\8\8\-\9\a\6\8\-\4\d\7\b\9\7\7\4\0\f\9\4 ]] 00:12:54.931 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:54.931 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:54.931 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:55.189 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
ab5693fa-4819-4008-a650-3990866492d6 == \a\b\5\6\9\3\f\a\-\4\8\1\9\-\4\0\0\8\-\a\6\5\0\-\3\9\9\0\8\6\6\4\9\2\d\6 ]] 00:12:55.189 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1219481 00:12:55.189 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1219481 ']' 00:12:55.189 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1219481 00:12:55.189 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:12:55.189 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:55.189 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1219481 00:12:55.189 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:55.189 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:55.189 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1219481' 00:12:55.189 killing process with pid 1219481 00:12:55.189 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1219481 00:12:55.189 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1219481 00:12:55.754 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:56.011 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:12:56.011 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:12:56.011 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:56.011 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:12:56.011 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:56.011 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:12:56.011 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:56.011 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:56.011 rmmod nvme_tcp 00:12:56.011 rmmod nvme_fabrics 00:12:56.011 rmmod nvme_keyring 00:12:56.011 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:56.011 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:12:56.011 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:12:56.011 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 1217856 ']' 00:12:56.011 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 1217856 00:12:56.011 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1217856 ']' 00:12:56.011 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1217856 00:12:56.011 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@955 -- # uname 00:12:56.011 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:56.011 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1217856 00:12:56.270 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:56.270 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:56.270 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1217856' 00:12:56.270 killing process with pid 1217856 00:12:56.270 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1217856 00:12:56.270 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1217856 00:12:56.529 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:56.529 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:56.529 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:56.529 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:12:56.529 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:12:56.529 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:56.529 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:12:56.529 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:56.529 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:56.529 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.529 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:56.529 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.434 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:58.434 00:12:58.434 real 0m22.074s 00:12:58.434 user 0m29.134s 00:12:58.434 sys 0m4.262s 00:12:58.434 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:58.434 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:58.434 ************************************ 00:12:58.434 END TEST nvmf_ns_masking 00:12:58.434 ************************************ 00:12:58.434 18:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:12:58.434 18:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:58.434 18:20:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:58.434 18:20:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:58.434 18:20:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 
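Note: stripped of the xtrace plumbing, the ns_masking flow exercised above reduces to a few RPCs plus initiator-side checks. A recap sketch, with the NQNs and rpc.py path from this run:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SUBSYS=nqn.2016-06.io.spdk:cnode1
    HOST=nqn.2016-06.io.spdk:host1

    # Attach Malloc1 as namespace 1, hidden from every host by default.
    $RPC nvmf_subsystem_add_ns $SUBSYS Malloc1 -n 1 --no-auto-visible

    # Grant and later revoke visibility for one host NQN.
    $RPC nvmf_ns_add_host    $SUBSYS 1 $HOST
    $RPC nvmf_ns_remove_host $SUBSYS 1 $HOST

    # From a connected initiator, a masked namespace drops out of list-ns and its
    # NGUID reads back as all zeroes:
    nvme list-ns /dev/nvme0
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid

The test also asserted the error path: calling nvmf_ns_remove_host against namespace 2, presumably because that namespace was left auto-visible, fails with the -32602 "Invalid parameters" JSON-RPC response captured above.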
00:12:58.691 ************************************ 00:12:58.691 START TEST nvmf_nvme_cli 00:12:58.691 ************************************ 00:12:58.691 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:58.691 * Looking for test storage... 00:12:58.691 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:58.691 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:58.691 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lcov --version 00:12:58.691 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:58.691 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:58.691 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:58.691 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:58.691 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:58.691 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:12:58.691 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:12:58.691 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:12:58.691 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:12:58.691 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:12:58.691 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:12:58.691 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:12:58.691 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:58.691 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:12:58.691 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:12:58.691 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:58.691 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:58.691 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:12:58.691 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:12:58.691 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:58.691 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:12:58.691 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:12:58.691 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:12:58.691 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:12:58.691 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:58.691 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:12:58.691 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:12:58.691 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:58.691 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:58.691 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:58.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.692 --rc genhtml_branch_coverage=1 00:12:58.692 --rc genhtml_function_coverage=1 00:12:58.692 --rc genhtml_legend=1 00:12:58.692 --rc geninfo_all_blocks=1 00:12:58.692 --rc geninfo_unexecuted_blocks=1 00:12:58.692 00:12:58.692 ' 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:58.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.692 --rc genhtml_branch_coverage=1 00:12:58.692 --rc genhtml_function_coverage=1 00:12:58.692 --rc genhtml_legend=1 00:12:58.692 --rc geninfo_all_blocks=1 00:12:58.692 --rc geninfo_unexecuted_blocks=1 00:12:58.692 00:12:58.692 ' 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:58.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.692 --rc genhtml_branch_coverage=1 00:12:58.692 --rc genhtml_function_coverage=1 00:12:58.692 --rc genhtml_legend=1 00:12:58.692 --rc geninfo_all_blocks=1 00:12:58.692 --rc geninfo_unexecuted_blocks=1 00:12:58.692 00:12:58.692 ' 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:58.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.692 --rc genhtml_branch_coverage=1 00:12:58.692 --rc genhtml_function_coverage=1 00:12:58.692 --rc genhtml_legend=1 00:12:58.692 --rc geninfo_all_blocks=1 00:12:58.692 --rc geninfo_unexecuted_blocks=1 00:12:58.692 00:12:58.692 ' 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:58.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:58.692 18:20:16 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:12:58.692 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:01.226 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:01.226 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:01.226 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:01.226 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:01.226 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:01.226 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:01.226 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:01.226 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:01.226 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:01.226 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:01.226 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:01.226 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:01.226 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:01.226 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:01.226 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:01.226 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:01.226 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:01.227 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:01.227 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.227 
18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:01.227 Found net devices under 0000:09:00.0: cvl_0_0 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:01.227 Found net devices under 0000:09:00.1: cvl_0_1 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:01.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:01.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:13:01.227 00:13:01.227 --- 10.0.0.2 ping statistics --- 00:13:01.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.227 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:01.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:01.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:13:01.227 00:13:01.227 --- 10.0.0.1 ping statistics --- 00:13:01.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.227 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:01.227 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:01.228 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=1222118 00:13:01.228 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 1222118 00:13:01.228 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 1222118 ']' 00:13:01.228 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.228 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:01.228 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:01.228 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.228 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:01.228 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:01.228 [2024-10-08 18:20:19.349908] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:13:01.228 [2024-10-08 18:20:19.350005] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:01.228 [2024-10-08 18:20:19.412485] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:01.486 [2024-10-08 18:20:19.520638] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:01.486 [2024-10-08 18:20:19.520686] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:01.486 [2024-10-08 18:20:19.520709] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:01.486 [2024-10-08 18:20:19.520719] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:01.486 [2024-10-08 18:20:19.520728] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:01.486 [2024-10-08 18:20:19.522409] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.486 [2024-10-08 18:20:19.522469] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:01.486 [2024-10-08 18:20:19.522533] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:13:01.486 [2024-10-08 18:20:19.522537] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:01.486 [2024-10-08 18:20:19.674228] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:01.486 Malloc0 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:01.486 Malloc1 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:01.486 [2024-10-08 18:20:19.756375] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.486 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:13:01.745 00:13:01.745 Discovery Log Number of Records 2, Generation counter 2 00:13:01.745 =====Discovery Log Entry 0====== 00:13:01.745 trtype: tcp 00:13:01.745 adrfam: ipv4 00:13:01.745 subtype: current discovery subsystem 00:13:01.745 treq: not required 00:13:01.745 portid: 0 00:13:01.745 trsvcid: 4420 00:13:01.745 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:13:01.745 traddr: 10.0.0.2 00:13:01.745 eflags: explicit discovery connections, duplicate discovery information 00:13:01.745 sectype: none 00:13:01.745 =====Discovery Log Entry 1====== 00:13:01.745 trtype: tcp 00:13:01.745 adrfam: ipv4 00:13:01.745 subtype: nvme subsystem 00:13:01.745 treq: not required 00:13:01.745 portid: 0 00:13:01.745 trsvcid: 4420 00:13:01.745 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:01.745 traddr: 10.0.0.2 00:13:01.745 eflags: none 00:13:01.745 sectype: none 00:13:01.745 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:01.745 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:01.745 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:13:01.745 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:01.745 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:13:01.745 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:13:01.745 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:01.745 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:13:01.745 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:01.745 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:01.745 18:20:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:02.311 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:02.311 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:13:02.311 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:02.311 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:13:02.311 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:13:02.311 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:04.841 18:20:22 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:04.841 /dev/nvme0n2 ]] 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:04.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.841 18:20:22 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:04.841 rmmod nvme_tcp 00:13:04.841 rmmod nvme_fabrics 00:13:04.841 rmmod nvme_keyring 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 1222118 ']' 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 1222118 00:13:04.841 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 1222118 ']' 00:13:04.842 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 1222118 00:13:04.842 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:13:04.842 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:04.842 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
1222118 00:13:04.842 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:04.842 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:04.842 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1222118' 00:13:04.842 killing process with pid 1222118 00:13:04.842 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 1222118 00:13:04.842 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 1222118 00:13:05.101 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:05.101 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:05.101 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:05.101 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:05.101 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save 00:13:05.101 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:05.101 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore 00:13:05.101 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:05.101 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:05.101 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.101 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:05.101 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.000 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:07.000 00:13:07.000 real 0m8.476s 00:13:07.000 user 0m14.956s 00:13:07.000 sys 0m2.428s 00:13:07.000 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:07.000 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:07.000 ************************************ 00:13:07.000 END TEST nvmf_nvme_cli 00:13:07.000 ************************************ 00:13:07.000 18:20:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:13:07.001 18:20:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:07.001 18:20:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:07.001 18:20:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:07.001 18:20:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:07.001 ************************************ 00:13:07.001 START TEST nvmf_vfio_user 00:13:07.001 ************************************ 00:13:07.001 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:13:07.260 * Looking for test storage... 00:13:07.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lcov --version 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:07.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.260 --rc genhtml_branch_coverage=1 00:13:07.260 --rc genhtml_function_coverage=1 00:13:07.260 --rc genhtml_legend=1 00:13:07.260 --rc geninfo_all_blocks=1 00:13:07.260 --rc geninfo_unexecuted_blocks=1 00:13:07.260 00:13:07.260 ' 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:07.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.260 --rc genhtml_branch_coverage=1 00:13:07.260 --rc genhtml_function_coverage=1 00:13:07.260 --rc genhtml_legend=1 00:13:07.260 --rc geninfo_all_blocks=1 00:13:07.260 --rc geninfo_unexecuted_blocks=1 00:13:07.260 00:13:07.260 ' 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:07.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.260 --rc genhtml_branch_coverage=1 00:13:07.260 --rc genhtml_function_coverage=1 00:13:07.260 --rc genhtml_legend=1 00:13:07.260 --rc geninfo_all_blocks=1 00:13:07.260 --rc geninfo_unexecuted_blocks=1 00:13:07.260 00:13:07.260 ' 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:07.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.260 --rc genhtml_branch_coverage=1 00:13:07.260 --rc genhtml_function_coverage=1 00:13:07.260 --rc genhtml_legend=1 00:13:07.260 --rc geninfo_all_blocks=1 00:13:07.260 --rc geninfo_unexecuted_blocks=1 00:13:07.260 00:13:07.260 ' 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.260 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:07.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1222932 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1222932' 00:13:07.261 Process pid: 1222932 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1222932 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1222932 ']' 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:07.261 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:07.261 [2024-10-08 18:20:25.459476] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:13:07.261 [2024-10-08 18:20:25.459557] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.261 [2024-10-08 18:20:25.518111] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:07.519 [2024-10-08 18:20:25.629520] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:07.519 [2024-10-08 18:20:25.629602] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:07.519 [2024-10-08 18:20:25.629616] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:07.519 [2024-10-08 18:20:25.629641] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:07.519 [2024-10-08 18:20:25.629651] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:07.519 [2024-10-08 18:20:25.631380] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.519 [2024-10-08 18:20:25.631446] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:07.519 [2024-10-08 18:20:25.631513] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:13:07.519 [2024-10-08 18:20:25.631516] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.519 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:07.519 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:13:07.519 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:08.894 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:08.894 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:08.894 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:08.894 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:08.894 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:08.894 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:09.152 Malloc1 00:13:09.152 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:09.410 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:09.667 18:20:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:09.925 18:20:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:09.925 18:20:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:09.925 18:20:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:10.184 Malloc2 00:13:10.184 18:20:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
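The RPC trace above is the setup half of nvmf_vfio_user.sh: launch nvmf_tgt, wait for its RPC socket, create a VFIOUSER transport, then expose a 64 MiB malloc bdev through a subsystem that listens on a vfio-user directory; the second device (Malloc2 / cnode2) repeats the same steps in the lines that follow. Condensed into a standalone sketch for one device — the NQN, serial, core mask and socket path are simply the values this run used, the until-loop stands in for the harness's waitforlisten helper, and the relative paths assume the sketch is run from an SPDK checkout:

# start the target app on cores 0-3 and wait until its RPC socket answers
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

# transport + backing bdev + subsystem, then attach the namespace and the vfio-user listener
./scripts/rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
./scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0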
00:13:10.442 18:20:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:10.700 18:20:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:11.268 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:11.268 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:11.268 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:11.268 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:11.268 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:11.268 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:11.268 [2024-10-08 18:20:29.279425] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:13:11.268 [2024-10-08 18:20:29.279465] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1223479 ] 00:13:11.268 [2024-10-08 18:20:29.309718] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:11.268 [2024-10-08 18:20:29.320845] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:11.268 [2024-10-08 18:20:29.320876] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f7d9c651000 00:13:11.268 [2024-10-08 18:20:29.321840] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:11.268 [2024-10-08 18:20:29.322838] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:11.268 [2024-10-08 18:20:29.323847] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:11.268 [2024-10-08 18:20:29.324849] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:11.268 [2024-10-08 18:20:29.325852] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:11.268 [2024-10-08 18:20:29.326856] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:11.268 [2024-10-08 18:20:29.327860] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:13:11.268 [2024-10-08 18:20:29.328863] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:11.268 [2024-10-08 18:20:29.329869] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:11.268 [2024-10-08 18:20:29.329888] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f7d9c646000 00:13:11.268 [2024-10-08 18:20:29.331015] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:11.268 [2024-10-08 18:20:29.345978] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:11.268 [2024-10-08 18:20:29.346016] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:13:11.268 [2024-10-08 18:20:29.350989] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:11.268 [2024-10-08 18:20:29.351042] nvme_pcie_common.c: 149:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:11.268 [2024-10-08 18:20:29.351130] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:13:11.268 [2024-10-08 18:20:29.351160] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:13:11.268 [2024-10-08 18:20:29.351170] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:13:11.268 [2024-10-08 18:20:29.351985] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:11.268 [2024-10-08 18:20:29.352005] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:13:11.268 [2024-10-08 18:20:29.352017] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:13:11.268 [2024-10-08 18:20:29.352988] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:11.268 [2024-10-08 18:20:29.353007] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:13:11.268 [2024-10-08 18:20:29.353019] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:13:11.268 [2024-10-08 18:20:29.353990] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:11.268 [2024-10-08 18:20:29.354008] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:11.268 [2024-10-08 18:20:29.354996] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:11.268 [2024-10-08 
18:20:29.355016] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:13:11.268 [2024-10-08 18:20:29.355024] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:13:11.268 [2024-10-08 18:20:29.355040] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:11.268 [2024-10-08 18:20:29.355149] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:13:11.268 [2024-10-08 18:20:29.355157] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:11.268 [2024-10-08 18:20:29.355164] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:11.268 [2024-10-08 18:20:29.356000] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:11.268 [2024-10-08 18:20:29.357007] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:11.268 [2024-10-08 18:20:29.358013] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:11.268 [2024-10-08 18:20:29.359012] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:11.268 [2024-10-08 18:20:29.359154] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:11.268 [2024-10-08 18:20:29.360033] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:11.268 [2024-10-08 18:20:29.360051] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:11.268 [2024-10-08 18:20:29.360060] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:13:11.268 [2024-10-08 18:20:29.360083] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:13:11.268 [2024-10-08 18:20:29.360100] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:13:11.268 [2024-10-08 18:20:29.360122] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:11.268 [2024-10-08 18:20:29.360132] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:11.268 [2024-10-08 18:20:29.360138] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:11.268 [2024-10-08 18:20:29.360156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:11.268 [2024-10-08 18:20:29.360217] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:11.268 [2024-10-08 18:20:29.360232] nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:13:11.268 [2024-10-08 18:20:29.360240] nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:13:11.268 [2024-10-08 18:20:29.360246] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:13:11.268 [2024-10-08 18:20:29.360253] nvme_ctrlr.c:2115:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:11.268 [2024-10-08 18:20:29.360261] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:13:11.268 [2024-10-08 18:20:29.360284] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:13:11.268 [2024-10-08 18:20:29.360305] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:13:11.268 [2024-10-08 18:20:29.360324] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:13:11.268 [2024-10-08 18:20:29.360340] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:11.268 [2024-10-08 18:20:29.360363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:11.268 [2024-10-08 18:20:29.360379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:11.268 [2024-10-08 18:20:29.360393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:11.268 [2024-10-08 18:20:29.360405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:11.269 [2024-10-08 18:20:29.360418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:11.269 [2024-10-08 18:20:29.360426] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:11.269 [2024-10-08 18:20:29.360443] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:11.269 [2024-10-08 18:20:29.360459] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:11.269 [2024-10-08 18:20:29.360471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:11.269 [2024-10-08 18:20:29.360482] nvme_ctrlr.c:3065:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:13:11.269 [2024-10-08 18:20:29.360490] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:11.269 [2024-10-08 18:20:29.360501] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:13:11.269 [2024-10-08 18:20:29.360514] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:11.269 [2024-10-08 18:20:29.360528] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:11.269 [2024-10-08 18:20:29.360540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:11.269 [2024-10-08 18:20:29.360617] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:13:11.269 [2024-10-08 18:20:29.360648] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:11.269 [2024-10-08 18:20:29.360662] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:11.269 [2024-10-08 18:20:29.360670] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:11.269 [2024-10-08 18:20:29.360676] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:11.269 [2024-10-08 18:20:29.360685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:11.269 [2024-10-08 18:20:29.360701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:11.269 [2024-10-08 18:20:29.360721] nvme_ctrlr.c:4753:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:13:11.269 [2024-10-08 18:20:29.360737] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:13:11.269 [2024-10-08 18:20:29.360750] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:13:11.269 [2024-10-08 18:20:29.360762] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:11.269 [2024-10-08 18:20:29.360770] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:11.269 [2024-10-08 18:20:29.360776] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:11.269 [2024-10-08 18:20:29.360785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:11.269 [2024-10-08 18:20:29.360811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:11.269 [2024-10-08 18:20:29.360831] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:11.269 [2024-10-08 18:20:29.360845] 
nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:11.269 [2024-10-08 18:20:29.360857] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:11.269 [2024-10-08 18:20:29.360865] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:11.269 [2024-10-08 18:20:29.360870] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:11.269 [2024-10-08 18:20:29.360879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:11.269 [2024-10-08 18:20:29.360895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:11.269 [2024-10-08 18:20:29.360908] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:11.269 [2024-10-08 18:20:29.360919] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:13:11.269 [2024-10-08 18:20:29.360932] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:13:11.269 [2024-10-08 18:20:29.360941] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:13:11.269 [2024-10-08 18:20:29.360949] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:11.269 [2024-10-08 18:20:29.360957] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:13:11.269 [2024-10-08 18:20:29.360964] nvme_ctrlr.c:3165:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:13:11.269 [2024-10-08 18:20:29.360971] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:13:11.269 [2024-10-08 18:20:29.360979] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:13:11.269 [2024-10-08 18:20:29.361003] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:11.269 [2024-10-08 18:20:29.361020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:11.269 [2024-10-08 18:20:29.361040] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:11.269 [2024-10-08 18:20:29.361052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:11.269 [2024-10-08 18:20:29.361068] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:11.269 [2024-10-08 18:20:29.361079] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:11.269 [2024-10-08 18:20:29.361094] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:11.269 [2024-10-08 18:20:29.361105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:11.269 [2024-10-08 18:20:29.361127] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:11.269 [2024-10-08 18:20:29.361136] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:11.269 [2024-10-08 18:20:29.361142] nvme_pcie_common.c:1265:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:11.269 [2024-10-08 18:20:29.361148] nvme_pcie_common.c:1281:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:11.269 [2024-10-08 18:20:29.361154] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:11.269 [2024-10-08 18:20:29.361163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:11.269 [2024-10-08 18:20:29.361174] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:11.269 [2024-10-08 18:20:29.361182] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:11.269 [2024-10-08 18:20:29.361187] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:11.269 [2024-10-08 18:20:29.361196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:11.269 [2024-10-08 18:20:29.361206] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:11.269 [2024-10-08 18:20:29.361214] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:11.269 [2024-10-08 18:20:29.361220] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:11.269 [2024-10-08 18:20:29.361228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:11.269 [2024-10-08 18:20:29.361240] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:11.269 [2024-10-08 18:20:29.361247] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:11.269 [2024-10-08 18:20:29.361253] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:11.269 [2024-10-08 18:20:29.361262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:11.269 [2024-10-08 18:20:29.361296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:11.269 [2024-10-08 18:20:29.361320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:11.269 [2024-10-08 18:20:29.361338] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:11.269 [2024-10-08 18:20:29.361351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:11.269 ===================================================== 00:13:11.269 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:11.269 ===================================================== 00:13:11.269 Controller Capabilities/Features 00:13:11.269 ================================ 00:13:11.269 Vendor ID: 4e58 00:13:11.269 Subsystem Vendor ID: 4e58 00:13:11.269 Serial Number: SPDK1 00:13:11.269 Model Number: SPDK bdev Controller 00:13:11.269 Firmware Version: 25.01 00:13:11.269 Recommended Arb Burst: 6 00:13:11.269 IEEE OUI Identifier: 8d 6b 50 00:13:11.269 Multi-path I/O 00:13:11.269 May have multiple subsystem ports: Yes 00:13:11.269 May have multiple controllers: Yes 00:13:11.269 Associated with SR-IOV VF: No 00:13:11.269 Max Data Transfer Size: 131072 00:13:11.269 Max Number of Namespaces: 32 00:13:11.269 Max Number of I/O Queues: 127 00:13:11.269 NVMe Specification Version (VS): 1.3 00:13:11.269 NVMe Specification Version (Identify): 1.3 00:13:11.269 Maximum Queue Entries: 256 00:13:11.269 Contiguous Queues Required: Yes 00:13:11.269 Arbitration Mechanisms Supported 00:13:11.269 Weighted Round Robin: Not Supported 00:13:11.269 Vendor Specific: Not Supported 00:13:11.269 Reset Timeout: 15000 ms 00:13:11.269 Doorbell Stride: 4 bytes 00:13:11.269 NVM Subsystem Reset: Not Supported 00:13:11.269 Command Sets Supported 00:13:11.269 NVM Command Set: Supported 00:13:11.269 Boot Partition: Not Supported 00:13:11.269 Memory Page Size Minimum: 4096 bytes 00:13:11.269 Memory Page Size Maximum: 4096 bytes 00:13:11.269 Persistent Memory Region: Not Supported 00:13:11.269 Optional Asynchronous Events Supported 00:13:11.270 Namespace Attribute Notices: Supported 00:13:11.270 Firmware Activation Notices: Not Supported 00:13:11.270 ANA Change Notices: Not Supported 00:13:11.270 PLE Aggregate Log Change Notices: Not Supported 00:13:11.270 LBA Status Info Alert Notices: Not Supported 00:13:11.270 EGE Aggregate Log Change Notices: Not Supported 00:13:11.270 Normal NVM Subsystem Shutdown event: Not Supported 00:13:11.270 Zone Descriptor Change Notices: Not Supported 00:13:11.270 Discovery Log Change Notices: Not Supported 00:13:11.270 Controller Attributes 00:13:11.270 128-bit Host Identifier: Supported 00:13:11.270 Non-Operational Permissive Mode: Not Supported 00:13:11.270 NVM Sets: Not Supported 00:13:11.270 Read Recovery Levels: Not Supported 00:13:11.270 Endurance Groups: Not Supported 00:13:11.270 Predictable Latency Mode: Not Supported 00:13:11.270 Traffic Based Keep ALive: Not Supported 00:13:11.270 Namespace Granularity: Not Supported 00:13:11.270 SQ Associations: Not Supported 00:13:11.270 UUID List: Not Supported 00:13:11.270 Multi-Domain Subsystem: Not Supported 00:13:11.270 Fixed Capacity Management: Not Supported 00:13:11.270 Variable Capacity Management: Not Supported 00:13:11.270 Delete Endurance Group: Not Supported 00:13:11.270 Delete NVM Set: Not Supported 00:13:11.270 Extended LBA Formats Supported: Not Supported 00:13:11.270 Flexible Data Placement Supported: Not Supported 00:13:11.270 00:13:11.270 Controller Memory Buffer Support 00:13:11.270 ================================ 00:13:11.270 Supported: No 00:13:11.270 00:13:11.270 Persistent Memory Region Support 00:13:11.270 
================================ 00:13:11.270 Supported: No 00:13:11.270 00:13:11.270 Admin Command Set Attributes 00:13:11.270 ============================ 00:13:11.270 Security Send/Receive: Not Supported 00:13:11.270 Format NVM: Not Supported 00:13:11.270 Firmware Activate/Download: Not Supported 00:13:11.270 Namespace Management: Not Supported 00:13:11.270 Device Self-Test: Not Supported 00:13:11.270 Directives: Not Supported 00:13:11.270 NVMe-MI: Not Supported 00:13:11.270 Virtualization Management: Not Supported 00:13:11.270 Doorbell Buffer Config: Not Supported 00:13:11.270 Get LBA Status Capability: Not Supported 00:13:11.270 Command & Feature Lockdown Capability: Not Supported 00:13:11.270 Abort Command Limit: 4 00:13:11.270 Async Event Request Limit: 4 00:13:11.270 Number of Firmware Slots: N/A 00:13:11.270 Firmware Slot 1 Read-Only: N/A 00:13:11.270 Firmware Activation Without Reset: N/A 00:13:11.270 Multiple Update Detection Support: N/A 00:13:11.270 Firmware Update Granularity: No Information Provided 00:13:11.270 Per-Namespace SMART Log: No 00:13:11.270 Asymmetric Namespace Access Log Page: Not Supported 00:13:11.270 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:11.270 Command Effects Log Page: Supported 00:13:11.270 Get Log Page Extended Data: Supported 00:13:11.270 Telemetry Log Pages: Not Supported 00:13:11.270 Persistent Event Log Pages: Not Supported 00:13:11.270 Supported Log Pages Log Page: May Support 00:13:11.270 Commands Supported & Effects Log Page: Not Supported 00:13:11.270 Feature Identifiers & Effects Log Page:May Support 00:13:11.270 NVMe-MI Commands & Effects Log Page: May Support 00:13:11.270 Data Area 4 for Telemetry Log: Not Supported 00:13:11.270 Error Log Page Entries Supported: 128 00:13:11.270 Keep Alive: Supported 00:13:11.270 Keep Alive Granularity: 10000 ms 00:13:11.270 00:13:11.270 NVM Command Set Attributes 00:13:11.270 ========================== 00:13:11.270 Submission Queue Entry Size 00:13:11.270 Max: 64 00:13:11.270 Min: 64 00:13:11.270 Completion Queue Entry Size 00:13:11.270 Max: 16 00:13:11.270 Min: 16 00:13:11.270 Number of Namespaces: 32 00:13:11.270 Compare Command: Supported 00:13:11.270 Write Uncorrectable Command: Not Supported 00:13:11.270 Dataset Management Command: Supported 00:13:11.270 Write Zeroes Command: Supported 00:13:11.270 Set Features Save Field: Not Supported 00:13:11.270 Reservations: Not Supported 00:13:11.270 Timestamp: Not Supported 00:13:11.270 Copy: Supported 00:13:11.270 Volatile Write Cache: Present 00:13:11.270 Atomic Write Unit (Normal): 1 00:13:11.270 Atomic Write Unit (PFail): 1 00:13:11.270 Atomic Compare & Write Unit: 1 00:13:11.270 Fused Compare & Write: Supported 00:13:11.270 Scatter-Gather List 00:13:11.270 SGL Command Set: Supported (Dword aligned) 00:13:11.270 SGL Keyed: Not Supported 00:13:11.270 SGL Bit Bucket Descriptor: Not Supported 00:13:11.270 SGL Metadata Pointer: Not Supported 00:13:11.270 Oversized SGL: Not Supported 00:13:11.270 SGL Metadata Address: Not Supported 00:13:11.270 SGL Offset: Not Supported 00:13:11.270 Transport SGL Data Block: Not Supported 00:13:11.270 Replay Protected Memory Block: Not Supported 00:13:11.270 00:13:11.270 Firmware Slot Information 00:13:11.270 ========================= 00:13:11.270 Active slot: 1 00:13:11.270 Slot 1 Firmware Revision: 25.01 00:13:11.270 00:13:11.270 00:13:11.270 Commands Supported and Effects 00:13:11.270 ============================== 00:13:11.270 Admin Commands 00:13:11.270 -------------- 00:13:11.270 Get Log Page (02h): Supported 
00:13:11.270 Identify (06h): Supported 00:13:11.270 Abort (08h): Supported 00:13:11.270 Set Features (09h): Supported 00:13:11.270 Get Features (0Ah): Supported 00:13:11.270 Asynchronous Event Request (0Ch): Supported 00:13:11.270 Keep Alive (18h): Supported 00:13:11.270 I/O Commands 00:13:11.270 ------------ 00:13:11.270 Flush (00h): Supported LBA-Change 00:13:11.270 Write (01h): Supported LBA-Change 00:13:11.270 Read (02h): Supported 00:13:11.270 Compare (05h): Supported 00:13:11.270 Write Zeroes (08h): Supported LBA-Change 00:13:11.270 Dataset Management (09h): Supported LBA-Change 00:13:11.270 Copy (19h): Supported LBA-Change 00:13:11.270 00:13:11.270 Error Log 00:13:11.270 ========= 00:13:11.270 00:13:11.270 Arbitration 00:13:11.270 =========== 00:13:11.270 Arbitration Burst: 1 00:13:11.270 00:13:11.270 Power Management 00:13:11.270 ================ 00:13:11.270 Number of Power States: 1 00:13:11.270 Current Power State: Power State #0 00:13:11.270 Power State #0: 00:13:11.270 Max Power: 0.00 W 00:13:11.270 Non-Operational State: Operational 00:13:11.270 Entry Latency: Not Reported 00:13:11.270 Exit Latency: Not Reported 00:13:11.270 Relative Read Throughput: 0 00:13:11.270 Relative Read Latency: 0 00:13:11.270 Relative Write Throughput: 0 00:13:11.270 Relative Write Latency: 0 00:13:11.270 Idle Power: Not Reported 00:13:11.270 Active Power: Not Reported 00:13:11.270 Non-Operational Permissive Mode: Not Supported 00:13:11.270 00:13:11.270 Health Information 00:13:11.270 ================== 00:13:11.270 Critical Warnings: 00:13:11.270 Available Spare Space: OK 00:13:11.270 Temperature: OK 00:13:11.270 Device Reliability: OK 00:13:11.270 Read Only: No 00:13:11.270 Volatile Memory Backup: OK 00:13:11.270 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:11.270 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:11.270 Available Spare: 0% 00:13:11.270 Available Sp[2024-10-08 18:20:29.361472] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:11.270 [2024-10-08 18:20:29.361492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:11.270 [2024-10-08 18:20:29.361533] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:13:11.270 [2024-10-08 18:20:29.361551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.270 [2024-10-08 18:20:29.361562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.270 [2024-10-08 18:20:29.361586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.270 [2024-10-08 18:20:29.361596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.270 [2024-10-08 18:20:29.365299] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:11.270 [2024-10-08 18:20:29.365322] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:11.270 [2024-10-08 18:20:29.366064] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling 
controller 00:13:11.270 [2024-10-08 18:20:29.366150] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:13:11.270 [2024-10-08 18:20:29.366164] nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:13:11.270 [2024-10-08 18:20:29.367070] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:11.270 [2024-10-08 18:20:29.367092] nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:13:11.270 [2024-10-08 18:20:29.367150] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:11.270 [2024-10-08 18:20:29.369113] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:11.271 are Threshold: 0% 00:13:11.271 Life Percentage Used: 0% 00:13:11.271 Data Units Read: 0 00:13:11.271 Data Units Written: 0 00:13:11.271 Host Read Commands: 0 00:13:11.271 Host Write Commands: 0 00:13:11.271 Controller Busy Time: 0 minutes 00:13:11.271 Power Cycles: 0 00:13:11.271 Power On Hours: 0 hours 00:13:11.271 Unsafe Shutdowns: 0 00:13:11.271 Unrecoverable Media Errors: 0 00:13:11.271 Lifetime Error Log Entries: 0 00:13:11.271 Warning Temperature Time: 0 minutes 00:13:11.271 Critical Temperature Time: 0 minutes 00:13:11.271 00:13:11.271 Number of Queues 00:13:11.271 ================ 00:13:11.271 Number of I/O Submission Queues: 127 00:13:11.271 Number of I/O Completion Queues: 127 00:13:11.271 00:13:11.271 Active Namespaces 00:13:11.271 ================= 00:13:11.271 Namespace ID:1 00:13:11.271 Error Recovery Timeout: Unlimited 00:13:11.271 Command Set Identifier: NVM (00h) 00:13:11.271 Deallocate: Supported 00:13:11.271 Deallocated/Unwritten Error: Not Supported 00:13:11.271 Deallocated Read Value: Unknown 00:13:11.271 Deallocate in Write Zeroes: Not Supported 00:13:11.271 Deallocated Guard Field: 0xFFFF 00:13:11.271 Flush: Supported 00:13:11.271 Reservation: Supported 00:13:11.271 Namespace Sharing Capabilities: Multiple Controllers 00:13:11.271 Size (in LBAs): 131072 (0GiB) 00:13:11.271 Capacity (in LBAs): 131072 (0GiB) 00:13:11.271 Utilization (in LBAs): 131072 (0GiB) 00:13:11.271 NGUID: 4F73AD7C9778419DB35820F28C963A88 00:13:11.271 UUID: 4f73ad7c-9778-419d-b358-20f28c963a88 00:13:11.271 Thin Provisioning: Not Supported 00:13:11.271 Per-NS Atomic Units: Yes 00:13:11.271 Atomic Boundary Size (Normal): 0 00:13:11.271 Atomic Boundary Size (PFail): 0 00:13:11.271 Atomic Boundary Offset: 0 00:13:11.271 Maximum Single Source Range Length: 65535 00:13:11.271 Maximum Copy Length: 65535 00:13:11.271 Maximum Source Range Count: 1 00:13:11.271 NGUID/EUI64 Never Reused: No 00:13:11.271 Namespace Write Protected: No 00:13:11.271 Number of LBA Formats: 1 00:13:11.271 Current LBA Format: LBA Format #00 00:13:11.271 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:11.271 00:13:11.271 18:20:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:11.574 [2024-10-08 18:20:29.602135] vfio_user.c:2836:enable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:16.860 Initializing NVMe Controllers 00:13:16.860 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:16.860 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:16.860 Initialization complete. Launching workers. 00:13:16.860 ======================================================== 00:13:16.860 Latency(us) 00:13:16.860 Device Information : IOPS MiB/s Average min max 00:13:16.860 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 31017.00 121.16 4128.05 1228.30 10317.34 00:13:16.860 ======================================================== 00:13:16.860 Total : 31017.00 121.16 4128.05 1228.30 10317.34 00:13:16.860 00:13:16.860 [2024-10-08 18:20:34.624440] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:16.860 18:20:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:16.860 [2024-10-08 18:20:34.876674] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:22.123 Initializing NVMe Controllers 00:13:22.123 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:22.123 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:22.123 Initialization complete. Launching workers. 00:13:22.123 ======================================================== 00:13:22.123 Latency(us) 00:13:22.123 Device Information : IOPS MiB/s Average min max 00:13:22.123 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15961.37 62.35 8024.56 4138.83 15974.68 00:13:22.123 ======================================================== 00:13:22.123 Total : 15961.37 62.35 8024.56 4138.83 15974.68 00:13:22.123 00:13:22.123 [2024-10-08 18:20:39.919188] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:22.123 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:22.123 [2024-10-08 18:20:40.141411] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:27.384 [2024-10-08 18:20:45.221707] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:27.384 Initializing NVMe Controllers 00:13:27.384 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:27.384 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:27.384 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:27.384 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:27.384 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:27.384 Initialization complete. Launching workers. 
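Both spdk_nvme_perf passes above (4 KiB reads, then writes, at queue depth 128) reach the target purely through the vfio-user socket: the -r transport ID string carries trtype:VFIOUSER, the vfio-user directory as traddr, and the subsystem NQN, in place of an IP address and port. A minimal invocation in the same form — TRID is just a shell variable introduced here for readability, and the workload flags mirror the read pass above rather than being required values:

TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
# queue depth 128, 4 KiB reads for 5 s, worker pinned to core 1 (-c 0x2)
./build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2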
00:13:27.384 Starting thread on core 2 00:13:27.384 Starting thread on core 3 00:13:27.384 Starting thread on core 1 00:13:27.384 18:20:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:27.384 [2024-10-08 18:20:45.534757] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:30.668 [2024-10-08 18:20:48.595297] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:30.668 Initializing NVMe Controllers 00:13:30.668 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:30.668 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:30.668 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:30.668 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:30.668 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:30.668 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:30.668 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:30.668 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:30.668 Initialization complete. Launching workers. 00:13:30.668 Starting thread on core 1 with urgent priority queue 00:13:30.668 Starting thread on core 2 with urgent priority queue 00:13:30.668 Starting thread on core 3 with urgent priority queue 00:13:30.668 Starting thread on core 0 with urgent priority queue 00:13:30.668 SPDK bdev Controller (SPDK1 ) core 0: 4905.67 IO/s 20.38 secs/100000 ios 00:13:30.668 SPDK bdev Controller (SPDK1 ) core 1: 5291.33 IO/s 18.90 secs/100000 ios 00:13:30.668 SPDK bdev Controller (SPDK1 ) core 2: 5139.67 IO/s 19.46 secs/100000 ios 00:13:30.668 SPDK bdev Controller (SPDK1 ) core 3: 5228.33 IO/s 19.13 secs/100000 ios 00:13:30.668 ======================================================== 00:13:30.668 00:13:30.668 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:30.668 [2024-10-08 18:20:48.898859] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:30.668 Initializing NVMe Controllers 00:13:30.668 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:30.668 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:30.668 Namespace ID: 1 size: 0GB 00:13:30.668 Initialization complete. 00:13:30.668 INFO: using host memory buffer for IO 00:13:30.668 Hello world! 
00:13:30.668 [2024-10-08 18:20:48.932506] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:30.926 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:31.184 [2024-10-08 18:20:49.237751] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:32.118 Initializing NVMe Controllers 00:13:32.118 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:32.118 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:32.118 Initialization complete. Launching workers. 00:13:32.118 submit (in ns) avg, min, max = 8010.6, 3555.6, 4003982.2 00:13:32.118 complete (in ns) avg, min, max = 26207.5, 2083.3, 4996990.0 00:13:32.118 00:13:32.118 Submit histogram 00:13:32.118 ================ 00:13:32.118 Range in us Cumulative Count 00:13:32.118 3.556 - 3.579: 0.7409% ( 95) 00:13:32.118 3.579 - 3.603: 3.4082% ( 342) 00:13:32.118 3.603 - 3.627: 9.8269% ( 823) 00:13:32.118 3.627 - 3.650: 19.7551% ( 1273) 00:13:32.118 3.650 - 3.674: 30.2527% ( 1346) 00:13:32.118 3.674 - 3.698: 37.3733% ( 913) 00:13:32.118 3.698 - 3.721: 43.2226% ( 750) 00:13:32.118 3.721 - 3.745: 48.7911% ( 714) 00:13:32.118 3.745 - 3.769: 55.0382% ( 801) 00:13:32.118 3.769 - 3.793: 60.6146% ( 715) 00:13:32.118 3.793 - 3.816: 64.3425% ( 478) 00:13:32.118 3.816 - 3.840: 67.3920% ( 391) 00:13:32.118 3.840 - 3.864: 71.5489% ( 533) 00:13:32.118 3.864 - 3.887: 75.9008% ( 558) 00:13:32.118 3.887 - 3.911: 80.1201% ( 541) 00:13:32.118 3.911 - 3.935: 83.1774% ( 392) 00:13:32.118 3.935 - 3.959: 85.2519% ( 266) 00:13:32.118 3.959 - 3.982: 87.1861% ( 248) 00:13:32.118 3.982 - 4.006: 89.2217% ( 261) 00:13:32.118 4.006 - 4.030: 90.4617% ( 159) 00:13:32.118 4.030 - 4.053: 91.6160% ( 148) 00:13:32.118 4.053 - 4.077: 92.7000% ( 139) 00:13:32.118 4.077 - 4.101: 93.7997% ( 141) 00:13:32.118 4.101 - 4.124: 94.4860% ( 88) 00:13:32.118 4.124 - 4.148: 95.2270% ( 95) 00:13:32.118 4.148 - 4.172: 95.7105% ( 62) 00:13:32.118 4.172 - 4.196: 96.0615% ( 45) 00:13:32.118 4.196 - 4.219: 96.3266% ( 34) 00:13:32.118 4.219 - 4.243: 96.4280% ( 13) 00:13:32.118 4.243 - 4.267: 96.5996% ( 22) 00:13:32.118 4.267 - 4.290: 96.7478% ( 19) 00:13:32.118 4.290 - 4.314: 96.8882% ( 18) 00:13:32.118 4.314 - 4.338: 97.0597% ( 22) 00:13:32.118 4.338 - 4.361: 97.1533% ( 12) 00:13:32.118 4.361 - 4.385: 97.2469% ( 12) 00:13:32.118 4.385 - 4.409: 97.2781% ( 4) 00:13:32.118 4.409 - 4.433: 97.3015% ( 3) 00:13:32.118 4.433 - 4.456: 97.3327% ( 4) 00:13:32.118 4.456 - 4.480: 97.3405% ( 1) 00:13:32.118 4.480 - 4.504: 97.3639% ( 3) 00:13:32.118 4.504 - 4.527: 97.3795% ( 2) 00:13:32.118 4.527 - 4.551: 97.3951% ( 2) 00:13:32.118 4.575 - 4.599: 97.4107% ( 2) 00:13:32.118 4.599 - 4.622: 97.4185% ( 1) 00:13:32.118 4.693 - 4.717: 97.4419% ( 3) 00:13:32.118 4.717 - 4.741: 97.4497% ( 1) 00:13:32.118 4.741 - 4.764: 97.4731% ( 3) 00:13:32.118 4.764 - 4.788: 97.5199% ( 6) 00:13:32.118 4.788 - 4.812: 97.5667% ( 6) 00:13:32.118 4.812 - 4.836: 97.6213% ( 7) 00:13:32.118 4.836 - 4.859: 97.6603% ( 5) 00:13:32.118 4.859 - 4.883: 97.7149% ( 7) 00:13:32.118 4.883 - 4.907: 97.7617% ( 6) 00:13:32.118 4.907 - 4.930: 97.8085% ( 6) 00:13:32.118 4.930 - 4.954: 97.8241% ( 2) 00:13:32.118 4.954 - 4.978: 97.8864% ( 8) 00:13:32.118 4.978 - 5.001: 97.9410% ( 7) 00:13:32.118 5.001 - 
5.025: 97.9722% ( 4) 00:13:32.118 5.025 - 5.049: 98.0112% ( 5) 00:13:32.118 5.049 - 5.073: 98.0268% ( 2) 00:13:32.118 5.073 - 5.096: 98.0580% ( 4) 00:13:32.118 5.096 - 5.120: 98.0658% ( 1) 00:13:32.118 5.120 - 5.144: 98.0814% ( 2) 00:13:32.118 5.144 - 5.167: 98.1048% ( 3) 00:13:32.118 5.167 - 5.191: 98.1282% ( 3) 00:13:32.118 5.215 - 5.239: 98.1438% ( 2) 00:13:32.118 5.262 - 5.286: 98.1516% ( 1) 00:13:32.118 5.286 - 5.310: 98.1750% ( 3) 00:13:32.118 5.310 - 5.333: 98.1906% ( 2) 00:13:32.118 5.333 - 5.357: 98.1984% ( 1) 00:13:32.118 5.357 - 5.381: 98.2296% ( 4) 00:13:32.118 5.381 - 5.404: 98.2374% ( 1) 00:13:32.118 5.452 - 5.476: 98.2452% ( 1) 00:13:32.118 5.476 - 5.499: 98.2530% ( 1) 00:13:32.118 5.570 - 5.594: 98.2608% ( 1) 00:13:32.118 5.594 - 5.618: 98.2686% ( 1) 00:13:32.118 5.760 - 5.784: 98.2764% ( 1) 00:13:32.118 5.831 - 5.855: 98.2842% ( 1) 00:13:32.118 5.855 - 5.879: 98.2920% ( 1) 00:13:32.118 5.926 - 5.950: 98.2998% ( 1) 00:13:32.118 5.997 - 6.021: 98.3076% ( 1) 00:13:32.119 6.044 - 6.068: 98.3154% ( 1) 00:13:32.119 6.068 - 6.116: 98.3232% ( 1) 00:13:32.119 6.163 - 6.210: 98.3466% ( 3) 00:13:32.119 6.210 - 6.258: 98.3622% ( 2) 00:13:32.119 6.258 - 6.305: 98.3700% ( 1) 00:13:32.119 6.353 - 6.400: 98.3778% ( 1) 00:13:32.119 6.684 - 6.732: 98.3856% ( 1) 00:13:32.119 6.874 - 6.921: 98.3934% ( 1) 00:13:32.119 7.016 - 7.064: 98.4168% ( 3) 00:13:32.119 7.064 - 7.111: 98.4246% ( 1) 00:13:32.119 7.348 - 7.396: 98.4324% ( 1) 00:13:32.119 7.396 - 7.443: 98.4402% ( 1) 00:13:32.119 7.490 - 7.538: 98.4480% ( 1) 00:13:32.119 7.538 - 7.585: 98.4558% ( 1) 00:13:32.119 7.585 - 7.633: 98.4636% ( 1) 00:13:32.119 7.633 - 7.680: 98.4714% ( 1) 00:13:32.119 7.727 - 7.775: 98.4792% ( 1) 00:13:32.119 7.917 - 7.964: 98.4870% ( 1) 00:13:32.119 8.059 - 8.107: 98.4948% ( 1) 00:13:32.119 8.107 - 8.154: 98.5026% ( 1) 00:13:32.119 8.249 - 8.296: 98.5182% ( 2) 00:13:32.119 8.296 - 8.344: 98.5416% ( 3) 00:13:32.119 8.344 - 8.391: 98.5494% ( 1) 00:13:32.119 8.581 - 8.628: 98.5650% ( 2) 00:13:32.119 8.723 - 8.770: 98.5728% ( 1) 00:13:32.119 8.770 - 8.818: 98.5806% ( 1) 00:13:32.119 8.865 - 8.913: 98.5884% ( 1) 00:13:32.119 8.913 - 8.960: 98.5962% ( 1) 00:13:32.119 9.055 - 9.102: 98.6118% ( 2) 00:13:32.119 9.102 - 9.150: 98.6196% ( 1) 00:13:32.119 9.197 - 9.244: 98.6274% ( 1) 00:13:32.119 9.244 - 9.292: 98.6508% ( 3) 00:13:32.119 9.339 - 9.387: 98.6664% ( 2) 00:13:32.119 9.576 - 9.624: 98.6742% ( 1) 00:13:32.119 9.624 - 9.671: 98.6820% ( 1) 00:13:32.119 9.719 - 9.766: 98.6898% ( 1) 00:13:32.119 9.861 - 9.908: 98.7054% ( 2) 00:13:32.119 9.908 - 9.956: 98.7131% ( 1) 00:13:32.119 10.003 - 10.050: 98.7209% ( 1) 00:13:32.119 10.145 - 10.193: 98.7287% ( 1) 00:13:32.119 10.193 - 10.240: 98.7365% ( 1) 00:13:32.119 10.382 - 10.430: 98.7443% ( 1) 00:13:32.119 10.667 - 10.714: 98.7521% ( 1) 00:13:32.119 10.714 - 10.761: 98.7599% ( 1) 00:13:32.119 10.856 - 10.904: 98.7677% ( 1) 00:13:32.119 10.904 - 10.951: 98.7755% ( 1) 00:13:32.119 11.046 - 11.093: 98.7833% ( 1) 00:13:32.119 11.188 - 11.236: 98.7911% ( 1) 00:13:32.119 11.283 - 11.330: 98.7989% ( 1) 00:13:32.119 11.473 - 11.520: 98.8067% ( 1) 00:13:32.119 11.710 - 11.757: 98.8223% ( 2) 00:13:32.119 12.231 - 12.326: 98.8301% ( 1) 00:13:32.119 12.326 - 12.421: 98.8379% ( 1) 00:13:32.119 12.516 - 12.610: 98.8457% ( 1) 00:13:32.119 12.610 - 12.705: 98.8535% ( 1) 00:13:32.119 12.705 - 12.800: 98.8691% ( 2) 00:13:32.119 12.800 - 12.895: 98.8847% ( 2) 00:13:32.119 13.179 - 13.274: 98.8925% ( 1) 00:13:32.119 13.274 - 13.369: 98.9003% ( 1) 00:13:32.119 13.748 - 13.843: 98.9081% ( 1) 
00:13:32.119 13.843 - 13.938: 98.9159% ( 1) 00:13:32.119 14.601 - 14.696: 98.9315% ( 2) 00:13:32.119 15.170 - 15.265: 98.9393% ( 1) 00:13:32.119 17.161 - 17.256: 98.9471% ( 1) 00:13:32.119 17.256 - 17.351: 98.9627% ( 2) 00:13:32.119 17.351 - 17.446: 98.9705% ( 1) 00:13:32.119 17.446 - 17.541: 99.0017% ( 4) 00:13:32.119 17.541 - 17.636: 99.0329% ( 4) 00:13:32.119 17.636 - 17.730: 99.1343% ( 13) 00:13:32.119 17.730 - 17.825: 99.1811% ( 6) 00:13:32.119 17.825 - 17.920: 99.2279% ( 6) 00:13:32.119 17.920 - 18.015: 99.3293% ( 13) 00:13:32.119 18.015 - 18.110: 99.3839% ( 7) 00:13:32.119 18.110 - 18.204: 99.4229% ( 5) 00:13:32.119 18.204 - 18.299: 99.4775% ( 7) 00:13:32.119 18.299 - 18.394: 99.5788% ( 13) 00:13:32.119 18.394 - 18.489: 99.6412% ( 8) 00:13:32.119 18.489 - 18.584: 99.6802% ( 5) 00:13:32.119 18.584 - 18.679: 99.7114% ( 4) 00:13:32.119 18.679 - 18.773: 99.7660% ( 7) 00:13:32.119 18.773 - 18.868: 99.7972% ( 4) 00:13:32.119 18.868 - 18.963: 99.8128% ( 2) 00:13:32.119 18.963 - 19.058: 99.8206% ( 1) 00:13:32.119 19.058 - 19.153: 99.8284% ( 1) 00:13:32.119 19.247 - 19.342: 99.8440% ( 2) 00:13:32.119 19.342 - 19.437: 99.8596% ( 2) 00:13:32.119 20.290 - 20.385: 99.8674% ( 1) 00:13:32.119 21.428 - 21.523: 99.8752% ( 1) 00:13:32.119 23.419 - 23.514: 99.8830% ( 1) 00:13:32.119 23.704 - 23.799: 99.8908% ( 1) 00:13:32.119 24.178 - 24.273: 99.8986% ( 1) 00:13:32.119 3859.342 - 3883.615: 99.9064% ( 1) 00:13:32.119 3980.705 - 4004.978: 100.0000% ( 12) 00:13:32.119 00:13:32.119 Complete histogram 00:13:32.119 ================== 00:13:32.119 Range in us Cumulative Count 00:13:32.119 2.074 - 2.086: 0.1638% ( 21) 00:13:32.119 2.086 - 2.098: 12.6813% ( 1605) 00:13:32.119 2.098 - 2.110: 24.9961% ( 1579) 00:13:32.119 2.110 - 2.121: 28.2483% ( 417) 00:13:32.119 2.121 - 2.133: 44.8526% ( 2129) 00:13:32.119 2.133 - 2.145: 52.1915% ( 941) 00:13:32.119 2.145 - 2.157: 54.8198% ( 337) 00:13:32.119 2.157 - 2.169: 63.5938% ( 1125) 00:13:32.119 2.169 - 2.181: 67.1268% ( 453) 00:13:32.119 2.181 - 2.193: 70.0203% ( 371) 00:13:32.119 2.193 - 2.204: 79.1530% ( 1171) 00:13:32.119 2.204 - 2.216: 82.0309% ( 369) 00:13:32.119 2.216 - 2.228: 83.3801% ( 173) 00:13:32.119 2.228 - 2.240: 85.7589% ( 305) 00:13:32.119 2.240 - 2.252: 88.4651% ( 347) 00:13:32.119 2.252 - 2.264: 89.7910% ( 170) 00:13:32.119 2.264 - 2.276: 91.7252% ( 248) 00:13:32.119 2.276 - 2.287: 93.1992% ( 189) 00:13:32.119 2.287 - 2.299: 93.5579% ( 46) 00:13:32.119 2.299 - 2.311: 94.0415% ( 62) 00:13:32.119 2.311 - 2.323: 94.7902% ( 96) 00:13:32.119 2.323 - 2.335: 95.2582% ( 60) 00:13:32.119 2.335 - 2.347: 95.3829% ( 16) 00:13:32.119 2.347 - 2.359: 95.5545% ( 22) 00:13:32.119 2.359 - 2.370: 95.6715% ( 15) 00:13:32.119 2.370 - 2.382: 95.7105% ( 5) 00:13:32.119 2.382 - 2.394: 95.8119% ( 13) 00:13:32.119 2.394 - 2.406: 96.1472% ( 43) 00:13:32.119 2.406 - 2.418: 96.4202% ( 35) 00:13:32.119 2.418 - 2.430: 96.7400% ( 41) 00:13:32.119 2.430 - 2.441: 97.0129% ( 35) 00:13:32.119 2.441 - 2.453: 97.2781% ( 34) 00:13:32.119 2.453 - 2.465: 97.4887% ( 27) 00:13:32.119 2.465 - 2.477: 97.6603% ( 22) 00:13:32.119 2.477 - 2.489: 97.8474% ( 24) 00:13:32.119 2.489 - 2.501: 97.9566% ( 14) 00:13:32.119 2.501 - 2.513: 98.0580% ( 13) 00:13:32.119 2.513 - 2.524: 98.1516% ( 12) 00:13:32.119 2.524 - 2.536: 98.1984% ( 6) 00:13:32.119 2.536 - 2.548: 98.2218% ( 3) 00:13:32.119 2.548 - 2.560: 98.3154% ( 12) 00:13:32.119 2.560 - 2.572: 98.3466% ( 4) 00:13:32.119 2.584 - 2.596: 98.3544% ( 1) 00:13:32.119 2.596 - 2.607: 98.3622% ( 1) 00:13:32.119 2.619 - 2.631: 98.3778% ( 2) 00:13:32.119 2.631 - 
2.643: 98.3856% ( 1) 00:13:32.119 2.643 - 2.655: 98.3934% ( 1) 00:13:32.119 2.679 - 2.690: 98.4090% ( 2) 00:13:32.119 2.714 - 2.726: 98.4168% ( 1) 00:13:32.119 2.738 - 2.750: 98.4246% ( 1) 00:13:32.119 2.750 - 2.761: 98.4402% ( 2) 00:13:32.119 2.773 - 2.785: 98.4480% ( 1) 00:13:32.119 2.880 - 2.892: 98.4558% ( 1) 00:13:32.119 2.987 - 2.999: 98.4636% ( 1) 00:13:32.119 3.081 - 3.105: 98.4714% ( 1) 00:13:32.119 3.366 - 3.390: 98.4792% ( 1) 00:13:32.119 3.413 - 3.437: 98.4870% ( 1) 00:13:32.119 3.437 - 3.461: 98.4948% ( 1) 00:13:32.119 3.461 - 3.484: 98.5026% ( 1) 00:13:32.119 3.508 - 3.532: 98.5104% ( 1) 00:13:32.119 3.532 - 3.556: 98.5182% ( 1) 00:13:32.119 3.556 - 3.579: 98.5260% ( 1) 00:13:32.119 3.579 - 3.603: 98.5494% ( 3) 00:13:32.119 3.603 - 3.627: 98.5650% ( 2) 00:13:32.119 3.627 - 3.650: 98.5806% ( 2) 00:13:32.119 3.650 - 3.674: 98.5884% ( 1) 00:13:32.119 3.674 - 3.698: 98.5962% ( 1) 00:13:32.119 3.698 - 3.721: 98.6040% ( 1) 00:13:32.119 3.721 - 3.745: 98.6118% ( 1) 00:13:32.119 3.769 - 3.793: 98.6274% ( 2) 00:13:32.119 3.793 - 3.816: 98.6586% ( 4) 00:13:32.119 3.816 - 3.840: 98.6742% ( 2) 00:13:32.119 3.864 - 3.887: 98.6820% ( 1) 00:13:32.119 3.887 - 3.911: 98.6976% ( 2) 00:13:32.119 3.911 - 3.935: 98.7054% ( 1) 00:13:32.119 3.982 - 4.006: 98.7209% ( 2) 00:13:32.119 4.172 - 4.196: 98.7287% ( 1) 00:13:32.119 4.219 - 4.243: 9[2024-10-08 18:20:50.259990] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:32.119 8.7365% ( 1) 00:13:32.119 5.831 - 5.855: 98.7443% ( 1) 00:13:32.119 5.855 - 5.879: 98.7521% ( 1) 00:13:32.119 5.879 - 5.902: 98.7599% ( 1) 00:13:32.119 6.258 - 6.305: 98.7677% ( 1) 00:13:32.119 6.495 - 6.542: 98.7755% ( 1) 00:13:32.119 6.779 - 6.827: 98.7833% ( 1) 00:13:32.119 6.827 - 6.874: 98.7911% ( 1) 00:13:32.119 7.348 - 7.396: 98.7989% ( 1) 00:13:32.119 7.490 - 7.538: 98.8067% ( 1) 00:13:32.119 7.822 - 7.870: 98.8145% ( 1) 00:13:32.119 8.296 - 8.344: 98.8223% ( 1) 00:13:32.119 8.344 - 8.391: 98.8301% ( 1) 00:13:32.119 9.007 - 9.055: 98.8379% ( 1) 00:13:32.119 10.382 - 10.430: 98.8457% ( 1) 00:13:32.119 10.477 - 10.524: 98.8535% ( 1) 00:13:32.119 10.524 - 10.572: 98.8613% ( 1) 00:13:32.119 12.516 - 12.610: 98.8691% ( 1) 00:13:32.119 15.455 - 15.550: 98.8769% ( 1) 00:13:32.119 15.644 - 15.739: 98.9003% ( 3) 00:13:32.119 15.739 - 15.834: 98.9237% ( 3) 00:13:32.119 15.834 - 15.929: 98.9471% ( 3) 00:13:32.120 15.929 - 16.024: 98.9627% ( 2) 00:13:32.120 16.024 - 16.119: 99.0095% ( 6) 00:13:32.120 16.119 - 16.213: 99.0407% ( 4) 00:13:32.120 16.213 - 16.308: 99.0875% ( 6) 00:13:32.120 16.308 - 16.403: 99.1187% ( 4) 00:13:32.120 16.403 - 16.498: 99.1421% ( 3) 00:13:32.120 16.593 - 16.687: 99.1655% ( 3) 00:13:32.120 16.687 - 16.782: 99.2279% ( 8) 00:13:32.120 16.782 - 16.877: 99.2747% ( 6) 00:13:32.120 16.877 - 16.972: 99.2981% ( 3) 00:13:32.120 16.972 - 17.067: 99.3293% ( 4) 00:13:32.120 17.067 - 17.161: 99.3371% ( 1) 00:13:32.120 17.161 - 17.256: 99.3449% ( 1) 00:13:32.120 17.351 - 17.446: 99.3527% ( 1) 00:13:32.120 17.446 - 17.541: 99.3605% ( 1) 00:13:32.120 17.920 - 18.015: 99.3683% ( 1) 00:13:32.120 18.015 - 18.110: 99.3761% ( 1) 00:13:32.120 18.110 - 18.204: 99.3839% ( 1) 00:13:32.120 18.489 - 18.584: 99.3917% ( 1) 00:13:32.120 23.419 - 23.514: 99.3995% ( 1) 00:13:32.120 3034.074 - 3046.210: 99.4073% ( 1) 00:13:32.120 3131.164 - 3155.437: 99.4151% ( 1) 00:13:32.120 3980.705 - 4004.978: 99.9142% ( 64) 00:13:32.120 4004.978 - 4029.250: 99.9922% ( 10) 00:13:32.120 4975.881 - 5000.154: 100.0000% ( 1) 00:13:32.120 
00:13:32.120 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:32.120 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:32.120 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:32.120 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:32.120 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:32.378 [ 00:13:32.378 { 00:13:32.378 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:32.378 "subtype": "Discovery", 00:13:32.378 "listen_addresses": [], 00:13:32.378 "allow_any_host": true, 00:13:32.378 "hosts": [] 00:13:32.378 }, 00:13:32.378 { 00:13:32.378 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:32.378 "subtype": "NVMe", 00:13:32.378 "listen_addresses": [ 00:13:32.378 { 00:13:32.378 "trtype": "VFIOUSER", 00:13:32.378 "adrfam": "IPv4", 00:13:32.378 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:32.378 "trsvcid": "0" 00:13:32.378 } 00:13:32.378 ], 00:13:32.378 "allow_any_host": true, 00:13:32.378 "hosts": [], 00:13:32.378 "serial_number": "SPDK1", 00:13:32.378 "model_number": "SPDK bdev Controller", 00:13:32.378 "max_namespaces": 32, 00:13:32.378 "min_cntlid": 1, 00:13:32.378 "max_cntlid": 65519, 00:13:32.378 "namespaces": [ 00:13:32.378 { 00:13:32.378 "nsid": 1, 00:13:32.378 "bdev_name": "Malloc1", 00:13:32.378 "name": "Malloc1", 00:13:32.378 "nguid": "4F73AD7C9778419DB35820F28C963A88", 00:13:32.378 "uuid": "4f73ad7c-9778-419d-b358-20f28c963a88" 00:13:32.378 } 00:13:32.378 ] 00:13:32.378 }, 00:13:32.378 { 00:13:32.378 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:32.378 "subtype": "NVMe", 00:13:32.378 "listen_addresses": [ 00:13:32.378 { 00:13:32.378 "trtype": "VFIOUSER", 00:13:32.378 "adrfam": "IPv4", 00:13:32.378 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:32.378 "trsvcid": "0" 00:13:32.378 } 00:13:32.378 ], 00:13:32.378 "allow_any_host": true, 00:13:32.378 "hosts": [], 00:13:32.378 "serial_number": "SPDK2", 00:13:32.378 "model_number": "SPDK bdev Controller", 00:13:32.378 "max_namespaces": 32, 00:13:32.378 "min_cntlid": 1, 00:13:32.378 "max_cntlid": 65519, 00:13:32.378 "namespaces": [ 00:13:32.378 { 00:13:32.378 "nsid": 1, 00:13:32.378 "bdev_name": "Malloc2", 00:13:32.378 "name": "Malloc2", 00:13:32.378 "nguid": "8E9E39DC8C2B467B91ACFDD4DAF72108", 00:13:32.378 "uuid": "8e9e39dc-8c2b-467b-91ac-fdd4daf72108" 00:13:32.378 } 00:13:32.378 ] 00:13:32.378 } 00:13:32.378 ] 00:13:32.378 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:32.378 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1225882 00:13:32.378 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:32.379 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:32.379 18:20:50 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:32.379 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:32.379 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:32.379 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:32.379 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:32.379 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:32.637 [2024-10-08 18:20:50.756954] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:32.637 Malloc3 00:13:32.895 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:32.895 [2024-10-08 18:20:51.182028] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:33.152 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:33.152 Asynchronous Event Request test 00:13:33.152 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:33.152 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:33.152 Registering asynchronous event callbacks... 00:13:33.152 Starting namespace attribute notice tests for all controllers... 00:13:33.152 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:33.152 aer_cb - Changed Namespace 00:13:33.152 Cleaning up... 
00:13:33.412 [ 00:13:33.412 { 00:13:33.412 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:33.412 "subtype": "Discovery", 00:13:33.412 "listen_addresses": [], 00:13:33.412 "allow_any_host": true, 00:13:33.412 "hosts": [] 00:13:33.412 }, 00:13:33.412 { 00:13:33.412 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:33.412 "subtype": "NVMe", 00:13:33.412 "listen_addresses": [ 00:13:33.412 { 00:13:33.412 "trtype": "VFIOUSER", 00:13:33.412 "adrfam": "IPv4", 00:13:33.412 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:33.412 "trsvcid": "0" 00:13:33.412 } 00:13:33.412 ], 00:13:33.412 "allow_any_host": true, 00:13:33.412 "hosts": [], 00:13:33.412 "serial_number": "SPDK1", 00:13:33.412 "model_number": "SPDK bdev Controller", 00:13:33.412 "max_namespaces": 32, 00:13:33.412 "min_cntlid": 1, 00:13:33.412 "max_cntlid": 65519, 00:13:33.412 "namespaces": [ 00:13:33.412 { 00:13:33.412 "nsid": 1, 00:13:33.412 "bdev_name": "Malloc1", 00:13:33.412 "name": "Malloc1", 00:13:33.412 "nguid": "4F73AD7C9778419DB35820F28C963A88", 00:13:33.412 "uuid": "4f73ad7c-9778-419d-b358-20f28c963a88" 00:13:33.412 }, 00:13:33.412 { 00:13:33.412 "nsid": 2, 00:13:33.412 "bdev_name": "Malloc3", 00:13:33.412 "name": "Malloc3", 00:13:33.412 "nguid": "CE91289229044719A15716B323C91530", 00:13:33.412 "uuid": "ce912892-2904-4719-a157-16b323c91530" 00:13:33.412 } 00:13:33.412 ] 00:13:33.412 }, 00:13:33.412 { 00:13:33.412 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:33.412 "subtype": "NVMe", 00:13:33.412 "listen_addresses": [ 00:13:33.412 { 00:13:33.412 "trtype": "VFIOUSER", 00:13:33.412 "adrfam": "IPv4", 00:13:33.412 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:33.412 "trsvcid": "0" 00:13:33.412 } 00:13:33.412 ], 00:13:33.412 "allow_any_host": true, 00:13:33.412 "hosts": [], 00:13:33.412 "serial_number": "SPDK2", 00:13:33.412 "model_number": "SPDK bdev Controller", 00:13:33.412 "max_namespaces": 32, 00:13:33.412 "min_cntlid": 1, 00:13:33.412 "max_cntlid": 65519, 00:13:33.412 "namespaces": [ 00:13:33.412 { 00:13:33.412 "nsid": 1, 00:13:33.412 "bdev_name": "Malloc2", 00:13:33.412 "name": "Malloc2", 00:13:33.412 "nguid": "8E9E39DC8C2B467B91ACFDD4DAF72108", 00:13:33.412 "uuid": "8e9e39dc-8c2b-467b-91ac-fdd4daf72108" 00:13:33.412 } 00:13:33.412 ] 00:13:33.412 } 00:13:33.412 ] 00:13:33.412 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1225882 00:13:33.412 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:33.412 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:33.412 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:33.412 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:33.412 [2024-10-08 18:20:51.482626] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:13:33.412 [2024-10-08 18:20:51.482669] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1226009 ] 00:13:33.412 [2024-10-08 18:20:51.517390] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:33.412 [2024-10-08 18:20:51.526468] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:33.412 [2024-10-08 18:20:51.526502] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fdd3fa8e000 00:13:33.412 [2024-10-08 18:20:51.527461] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:33.412 [2024-10-08 18:20:51.528470] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:33.412 [2024-10-08 18:20:51.529478] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:33.412 [2024-10-08 18:20:51.530487] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:33.412 [2024-10-08 18:20:51.531491] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:33.412 [2024-10-08 18:20:51.532497] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:33.412 [2024-10-08 18:20:51.533507] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:33.412 [2024-10-08 18:20:51.534512] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:33.412 [2024-10-08 18:20:51.535520] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:33.412 [2024-10-08 18:20:51.535542] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fdd3fa83000 00:13:33.412 [2024-10-08 18:20:51.536896] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:33.412 [2024-10-08 18:20:51.557207] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:33.412 [2024-10-08 18:20:51.557245] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:13:33.412 [2024-10-08 18:20:51.559363] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:33.412 [2024-10-08 18:20:51.559417] nvme_pcie_common.c: 149:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:33.412 [2024-10-08 18:20:51.559506] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:13:33.412 [2024-10-08 
18:20:51.559532] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:13:33.412 [2024-10-08 18:20:51.559546] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:13:33.413 [2024-10-08 18:20:51.560376] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:33.413 [2024-10-08 18:20:51.560397] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:13:33.413 [2024-10-08 18:20:51.560409] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:13:33.413 [2024-10-08 18:20:51.561383] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:33.413 [2024-10-08 18:20:51.561404] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:13:33.413 [2024-10-08 18:20:51.561418] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:13:33.413 [2024-10-08 18:20:51.562392] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:33.413 [2024-10-08 18:20:51.562413] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:33.413 [2024-10-08 18:20:51.563401] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:33.413 [2024-10-08 18:20:51.563421] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:13:33.413 [2024-10-08 18:20:51.563431] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:13:33.413 [2024-10-08 18:20:51.563443] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:33.413 [2024-10-08 18:20:51.563552] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:13:33.413 [2024-10-08 18:20:51.563560] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:33.413 [2024-10-08 18:20:51.563568] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:33.413 [2024-10-08 18:20:51.564407] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:33.413 [2024-10-08 18:20:51.565409] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:33.413 [2024-10-08 18:20:51.566421] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: 
offset 0x14, value 0x460001 00:13:33.413 [2024-10-08 18:20:51.567414] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:33.413 [2024-10-08 18:20:51.567499] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:33.413 [2024-10-08 18:20:51.568431] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:33.413 [2024-10-08 18:20:51.568452] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:33.413 [2024-10-08 18:20:51.568463] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:13:33.413 [2024-10-08 18:20:51.568499] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:13:33.413 [2024-10-08 18:20:51.568514] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:13:33.413 [2024-10-08 18:20:51.568532] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:33.413 [2024-10-08 18:20:51.568542] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:33.413 [2024-10-08 18:20:51.568549] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:33.413 [2024-10-08 18:20:51.568565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:33.413 [2024-10-08 18:20:51.577305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:33.413 [2024-10-08 18:20:51.577329] nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:13:33.413 [2024-10-08 18:20:51.577338] nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:13:33.413 [2024-10-08 18:20:51.577345] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:13:33.413 [2024-10-08 18:20:51.577355] nvme_ctrlr.c:2115:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:33.413 [2024-10-08 18:20:51.577363] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:13:33.413 [2024-10-08 18:20:51.577370] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:13:33.413 [2024-10-08 18:20:51.577378] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:13:33.413 [2024-10-08 18:20:51.577395] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:13:33.413 [2024-10-08 18:20:51.577413] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT 
CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:33.413 [2024-10-08 18:20:51.585301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:33.413 [2024-10-08 18:20:51.585325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.413 [2024-10-08 18:20:51.585339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.413 [2024-10-08 18:20:51.585358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.413 [2024-10-08 18:20:51.585370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:33.413 [2024-10-08 18:20:51.585379] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:13:33.413 [2024-10-08 18:20:51.585396] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:33.413 [2024-10-08 18:20:51.585411] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:33.413 [2024-10-08 18:20:51.593300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:33.413 [2024-10-08 18:20:51.593317] nvme_ctrlr.c:3065:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:13:33.413 [2024-10-08 18:20:51.593340] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:33.413 [2024-10-08 18:20:51.593352] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:13:33.413 [2024-10-08 18:20:51.593365] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:13:33.413 [2024-10-08 18:20:51.593380] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:33.413 [2024-10-08 18:20:51.601301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:33.413 [2024-10-08 18:20:51.601378] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:13:33.413 [2024-10-08 18:20:51.601393] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:13:33.413 [2024-10-08 18:20:51.601407] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:33.413 [2024-10-08 18:20:51.601415] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:33.413 [2024-10-08 18:20:51.601421] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number 
of PRP entries: 1 00:13:33.413 [2024-10-08 18:20:51.601431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:33.413 [2024-10-08 18:20:51.609302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:33.413 [2024-10-08 18:20:51.609325] nvme_ctrlr.c:4753:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:13:33.413 [2024-10-08 18:20:51.609355] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:13:33.413 [2024-10-08 18:20:51.609369] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:13:33.413 [2024-10-08 18:20:51.609382] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:33.413 [2024-10-08 18:20:51.609391] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:33.413 [2024-10-08 18:20:51.609397] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:33.413 [2024-10-08 18:20:51.609407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:33.413 [2024-10-08 18:20:51.617300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:33.413 [2024-10-08 18:20:51.617328] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:33.413 [2024-10-08 18:20:51.617343] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:33.413 [2024-10-08 18:20:51.617357] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:33.413 [2024-10-08 18:20:51.617366] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:33.413 [2024-10-08 18:20:51.617372] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:33.413 [2024-10-08 18:20:51.617382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:33.413 [2024-10-08 18:20:51.625299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:33.413 [2024-10-08 18:20:51.625320] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:33.413 [2024-10-08 18:20:51.625332] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:13:33.413 [2024-10-08 18:20:51.625346] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:13:33.413 [2024-10-08 18:20:51.625356] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:13:33.413 [2024-10-08 18:20:51.625364] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:33.413 [2024-10-08 18:20:51.625372] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:13:33.413 [2024-10-08 18:20:51.625380] nvme_ctrlr.c:3165:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:13:33.413 [2024-10-08 18:20:51.625387] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:13:33.414 [2024-10-08 18:20:51.625395] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:13:33.414 [2024-10-08 18:20:51.625419] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:33.414 [2024-10-08 18:20:51.633298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:33.414 [2024-10-08 18:20:51.633324] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:33.414 [2024-10-08 18:20:51.641300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:33.414 [2024-10-08 18:20:51.641325] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:33.414 [2024-10-08 18:20:51.649298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:33.414 [2024-10-08 18:20:51.649324] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:33.414 [2024-10-08 18:20:51.657313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:33.414 [2024-10-08 18:20:51.657345] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:33.414 [2024-10-08 18:20:51.657356] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:33.414 [2024-10-08 18:20:51.657362] nvme_pcie_common.c:1265:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:33.414 [2024-10-08 18:20:51.657368] nvme_pcie_common.c:1281:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:33.414 [2024-10-08 18:20:51.657374] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:33.414 [2024-10-08 18:20:51.657384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:33.414 [2024-10-08 18:20:51.657396] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:33.414 [2024-10-08 18:20:51.657404] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:33.414 [2024-10-08 18:20:51.657410] 
nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:33.414 [2024-10-08 18:20:51.657425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:33.414 [2024-10-08 18:20:51.657437] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:33.414 [2024-10-08 18:20:51.657446] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:33.414 [2024-10-08 18:20:51.657452] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:33.414 [2024-10-08 18:20:51.657460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:33.414 [2024-10-08 18:20:51.657472] nvme_pcie_common.c:1228:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:33.414 [2024-10-08 18:20:51.657480] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:33.414 [2024-10-08 18:20:51.657486] nvme_pcie_common.c:1316:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:33.414 [2024-10-08 18:20:51.657495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:33.414 [2024-10-08 18:20:51.665301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:33.414 [2024-10-08 18:20:51.665329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:33.414 [2024-10-08 18:20:51.665365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:33.414 [2024-10-08 18:20:51.665378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:33.414 ===================================================== 00:13:33.414 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:33.414 ===================================================== 00:13:33.414 Controller Capabilities/Features 00:13:33.414 ================================ 00:13:33.414 Vendor ID: 4e58 00:13:33.414 Subsystem Vendor ID: 4e58 00:13:33.414 Serial Number: SPDK2 00:13:33.414 Model Number: SPDK bdev Controller 00:13:33.414 Firmware Version: 25.01 00:13:33.414 Recommended Arb Burst: 6 00:13:33.414 IEEE OUI Identifier: 8d 6b 50 00:13:33.414 Multi-path I/O 00:13:33.414 May have multiple subsystem ports: Yes 00:13:33.414 May have multiple controllers: Yes 00:13:33.414 Associated with SR-IOV VF: No 00:13:33.414 Max Data Transfer Size: 131072 00:13:33.414 Max Number of Namespaces: 32 00:13:33.414 Max Number of I/O Queues: 127 00:13:33.414 NVMe Specification Version (VS): 1.3 00:13:33.414 NVMe Specification Version (Identify): 1.3 00:13:33.414 Maximum Queue Entries: 256 00:13:33.414 Contiguous Queues Required: Yes 00:13:33.414 Arbitration Mechanisms Supported 00:13:33.414 Weighted Round Robin: Not Supported 00:13:33.414 Vendor Specific: Not Supported 00:13:33.414 Reset Timeout: 15000 ms 00:13:33.414 Doorbell Stride: 4 bytes 00:13:33.414 NVM Subsystem Reset: Not Supported 00:13:33.414 Command 
Sets Supported 00:13:33.414 NVM Command Set: Supported 00:13:33.414 Boot Partition: Not Supported 00:13:33.414 Memory Page Size Minimum: 4096 bytes 00:13:33.414 Memory Page Size Maximum: 4096 bytes 00:13:33.414 Persistent Memory Region: Not Supported 00:13:33.414 Optional Asynchronous Events Supported 00:13:33.414 Namespace Attribute Notices: Supported 00:13:33.414 Firmware Activation Notices: Not Supported 00:13:33.414 ANA Change Notices: Not Supported 00:13:33.414 PLE Aggregate Log Change Notices: Not Supported 00:13:33.414 LBA Status Info Alert Notices: Not Supported 00:13:33.414 EGE Aggregate Log Change Notices: Not Supported 00:13:33.414 Normal NVM Subsystem Shutdown event: Not Supported 00:13:33.414 Zone Descriptor Change Notices: Not Supported 00:13:33.414 Discovery Log Change Notices: Not Supported 00:13:33.414 Controller Attributes 00:13:33.414 128-bit Host Identifier: Supported 00:13:33.414 Non-Operational Permissive Mode: Not Supported 00:13:33.414 NVM Sets: Not Supported 00:13:33.414 Read Recovery Levels: Not Supported 00:13:33.414 Endurance Groups: Not Supported 00:13:33.414 Predictable Latency Mode: Not Supported 00:13:33.414 Traffic Based Keep ALive: Not Supported 00:13:33.414 Namespace Granularity: Not Supported 00:13:33.414 SQ Associations: Not Supported 00:13:33.414 UUID List: Not Supported 00:13:33.414 Multi-Domain Subsystem: Not Supported 00:13:33.414 Fixed Capacity Management: Not Supported 00:13:33.414 Variable Capacity Management: Not Supported 00:13:33.414 Delete Endurance Group: Not Supported 00:13:33.414 Delete NVM Set: Not Supported 00:13:33.414 Extended LBA Formats Supported: Not Supported 00:13:33.414 Flexible Data Placement Supported: Not Supported 00:13:33.414 00:13:33.414 Controller Memory Buffer Support 00:13:33.414 ================================ 00:13:33.414 Supported: No 00:13:33.414 00:13:33.414 Persistent Memory Region Support 00:13:33.414 ================================ 00:13:33.414 Supported: No 00:13:33.414 00:13:33.414 Admin Command Set Attributes 00:13:33.414 ============================ 00:13:33.414 Security Send/Receive: Not Supported 00:13:33.414 Format NVM: Not Supported 00:13:33.414 Firmware Activate/Download: Not Supported 00:13:33.414 Namespace Management: Not Supported 00:13:33.414 Device Self-Test: Not Supported 00:13:33.414 Directives: Not Supported 00:13:33.414 NVMe-MI: Not Supported 00:13:33.414 Virtualization Management: Not Supported 00:13:33.414 Doorbell Buffer Config: Not Supported 00:13:33.414 Get LBA Status Capability: Not Supported 00:13:33.414 Command & Feature Lockdown Capability: Not Supported 00:13:33.414 Abort Command Limit: 4 00:13:33.414 Async Event Request Limit: 4 00:13:33.414 Number of Firmware Slots: N/A 00:13:33.414 Firmware Slot 1 Read-Only: N/A 00:13:33.414 Firmware Activation Without Reset: N/A 00:13:33.414 Multiple Update Detection Support: N/A 00:13:33.414 Firmware Update Granularity: No Information Provided 00:13:33.414 Per-Namespace SMART Log: No 00:13:33.414 Asymmetric Namespace Access Log Page: Not Supported 00:13:33.414 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:33.414 Command Effects Log Page: Supported 00:13:33.414 Get Log Page Extended Data: Supported 00:13:33.414 Telemetry Log Pages: Not Supported 00:13:33.414 Persistent Event Log Pages: Not Supported 00:13:33.414 Supported Log Pages Log Page: May Support 00:13:33.414 Commands Supported & Effects Log Page: Not Supported 00:13:33.414 Feature Identifiers & Effects Log Page:May Support 00:13:33.414 NVMe-MI Commands & Effects Log Page: May Support 
00:13:33.414 Data Area 4 for Telemetry Log: Not Supported 00:13:33.414 Error Log Page Entries Supported: 128 00:13:33.414 Keep Alive: Supported 00:13:33.414 Keep Alive Granularity: 10000 ms 00:13:33.414 00:13:33.414 NVM Command Set Attributes 00:13:33.414 ========================== 00:13:33.414 Submission Queue Entry Size 00:13:33.414 Max: 64 00:13:33.414 Min: 64 00:13:33.414 Completion Queue Entry Size 00:13:33.414 Max: 16 00:13:33.414 Min: 16 00:13:33.414 Number of Namespaces: 32 00:13:33.414 Compare Command: Supported 00:13:33.414 Write Uncorrectable Command: Not Supported 00:13:33.414 Dataset Management Command: Supported 00:13:33.414 Write Zeroes Command: Supported 00:13:33.414 Set Features Save Field: Not Supported 00:13:33.414 Reservations: Not Supported 00:13:33.414 Timestamp: Not Supported 00:13:33.414 Copy: Supported 00:13:33.414 Volatile Write Cache: Present 00:13:33.414 Atomic Write Unit (Normal): 1 00:13:33.414 Atomic Write Unit (PFail): 1 00:13:33.414 Atomic Compare & Write Unit: 1 00:13:33.414 Fused Compare & Write: Supported 00:13:33.414 Scatter-Gather List 00:13:33.414 SGL Command Set: Supported (Dword aligned) 00:13:33.414 SGL Keyed: Not Supported 00:13:33.414 SGL Bit Bucket Descriptor: Not Supported 00:13:33.414 SGL Metadata Pointer: Not Supported 00:13:33.414 Oversized SGL: Not Supported 00:13:33.415 SGL Metadata Address: Not Supported 00:13:33.415 SGL Offset: Not Supported 00:13:33.415 Transport SGL Data Block: Not Supported 00:13:33.415 Replay Protected Memory Block: Not Supported 00:13:33.415 00:13:33.415 Firmware Slot Information 00:13:33.415 ========================= 00:13:33.415 Active slot: 1 00:13:33.415 Slot 1 Firmware Revision: 25.01 00:13:33.415 00:13:33.415 00:13:33.415 Commands Supported and Effects 00:13:33.415 ============================== 00:13:33.415 Admin Commands 00:13:33.415 -------------- 00:13:33.415 Get Log Page (02h): Supported 00:13:33.415 Identify (06h): Supported 00:13:33.415 Abort (08h): Supported 00:13:33.415 Set Features (09h): Supported 00:13:33.415 Get Features (0Ah): Supported 00:13:33.415 Asynchronous Event Request (0Ch): Supported 00:13:33.415 Keep Alive (18h): Supported 00:13:33.415 I/O Commands 00:13:33.415 ------------ 00:13:33.415 Flush (00h): Supported LBA-Change 00:13:33.415 Write (01h): Supported LBA-Change 00:13:33.415 Read (02h): Supported 00:13:33.415 Compare (05h): Supported 00:13:33.415 Write Zeroes (08h): Supported LBA-Change 00:13:33.415 Dataset Management (09h): Supported LBA-Change 00:13:33.415 Copy (19h): Supported LBA-Change 00:13:33.415 00:13:33.415 Error Log 00:13:33.415 ========= 00:13:33.415 00:13:33.415 Arbitration 00:13:33.415 =========== 00:13:33.415 Arbitration Burst: 1 00:13:33.415 00:13:33.415 Power Management 00:13:33.415 ================ 00:13:33.415 Number of Power States: 1 00:13:33.415 Current Power State: Power State #0 00:13:33.415 Power State #0: 00:13:33.415 Max Power: 0.00 W 00:13:33.415 Non-Operational State: Operational 00:13:33.415 Entry Latency: Not Reported 00:13:33.415 Exit Latency: Not Reported 00:13:33.415 Relative Read Throughput: 0 00:13:33.415 Relative Read Latency: 0 00:13:33.415 Relative Write Throughput: 0 00:13:33.415 Relative Write Latency: 0 00:13:33.415 Idle Power: Not Reported 00:13:33.415 Active Power: Not Reported 00:13:33.415 Non-Operational Permissive Mode: Not Supported 00:13:33.415 00:13:33.415 Health Information 00:13:33.415 ================== 00:13:33.415 Critical Warnings: 00:13:33.415 Available Spare Space: OK 00:13:33.415 Temperature: OK 00:13:33.415 Device 
Reliability: OK 00:13:33.415 Read Only: No 00:13:33.415 Volatile Memory Backup: OK 00:13:33.415 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:33.415 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:33.415 Available Spare: 0% 00:13:33.415 Available Sp[2024-10-08 18:20:51.665494] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:33.415 [2024-10-08 18:20:51.673313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:33.415 [2024-10-08 18:20:51.673361] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:13:33.415 [2024-10-08 18:20:51.673379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.415 [2024-10-08 18:20:51.673390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.415 [2024-10-08 18:20:51.673399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.415 [2024-10-08 18:20:51.673409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.415 [2024-10-08 18:20:51.673476] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:33.415 [2024-10-08 18:20:51.673497] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:33.415 [2024-10-08 18:20:51.674475] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:33.415 [2024-10-08 18:20:51.674564] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:13:33.415 [2024-10-08 18:20:51.674579] nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:13:33.415 [2024-10-08 18:20:51.675489] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:33.415 [2024-10-08 18:20:51.675513] nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:13:33.415 [2024-10-08 18:20:51.675574] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:33.415 [2024-10-08 18:20:51.676765] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:33.673 are Threshold: 0% 00:13:33.673 Life Percentage Used: 0% 00:13:33.673 Data Units Read: 0 00:13:33.673 Data Units Written: 0 00:13:33.673 Host Read Commands: 0 00:13:33.673 Host Write Commands: 0 00:13:33.673 Controller Busy Time: 0 minutes 00:13:33.673 Power Cycles: 0 00:13:33.673 Power On Hours: 0 hours 00:13:33.673 Unsafe Shutdowns: 0 00:13:33.673 Unrecoverable Media Errors: 0 00:13:33.673 Lifetime Error Log Entries: 0 00:13:33.673 Warning Temperature Time: 0 minutes 00:13:33.673 Critical Temperature Time: 0 minutes 00:13:33.673 00:13:33.673 Number of Queues 00:13:33.673 ================ 00:13:33.673 Number of 
I/O Submission Queues: 127 00:13:33.673 Number of I/O Completion Queues: 127 00:13:33.673 00:13:33.673 Active Namespaces 00:13:33.673 ================= 00:13:33.673 Namespace ID:1 00:13:33.673 Error Recovery Timeout: Unlimited 00:13:33.673 Command Set Identifier: NVM (00h) 00:13:33.673 Deallocate: Supported 00:13:33.673 Deallocated/Unwritten Error: Not Supported 00:13:33.673 Deallocated Read Value: Unknown 00:13:33.673 Deallocate in Write Zeroes: Not Supported 00:13:33.673 Deallocated Guard Field: 0xFFFF 00:13:33.673 Flush: Supported 00:13:33.673 Reservation: Supported 00:13:33.673 Namespace Sharing Capabilities: Multiple Controllers 00:13:33.673 Size (in LBAs): 131072 (0GiB) 00:13:33.673 Capacity (in LBAs): 131072 (0GiB) 00:13:33.673 Utilization (in LBAs): 131072 (0GiB) 00:13:33.673 NGUID: 8E9E39DC8C2B467B91ACFDD4DAF72108 00:13:33.673 UUID: 8e9e39dc-8c2b-467b-91ac-fdd4daf72108 00:13:33.673 Thin Provisioning: Not Supported 00:13:33.673 Per-NS Atomic Units: Yes 00:13:33.673 Atomic Boundary Size (Normal): 0 00:13:33.673 Atomic Boundary Size (PFail): 0 00:13:33.673 Atomic Boundary Offset: 0 00:13:33.673 Maximum Single Source Range Length: 65535 00:13:33.673 Maximum Copy Length: 65535 00:13:33.673 Maximum Source Range Count: 1 00:13:33.673 NGUID/EUI64 Never Reused: No 00:13:33.673 Namespace Write Protected: No 00:13:33.673 Number of LBA Formats: 1 00:13:33.673 Current LBA Format: LBA Format #00 00:13:33.673 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:33.673 00:13:33.673 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:33.673 [2024-10-08 18:20:51.918104] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:38.937 Initializing NVMe Controllers 00:13:38.937 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:38.937 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:38.937 Initialization complete. Launching workers. 
00:13:38.937 ======================================================== 00:13:38.937 Latency(us) 00:13:38.937 Device Information : IOPS MiB/s Average min max 00:13:38.937 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32337.14 126.32 3957.74 1190.72 8287.65 00:13:38.937 ======================================================== 00:13:38.937 Total : 32337.14 126.32 3957.74 1190.72 8287.65 00:13:38.937 00:13:38.937 [2024-10-08 18:20:57.023662] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:38.937 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:39.195 [2024-10-08 18:20:57.270328] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:44.553 Initializing NVMe Controllers 00:13:44.553 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:44.553 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:44.553 Initialization complete. Launching workers. 00:13:44.553 ======================================================== 00:13:44.553 Latency(us) 00:13:44.553 Device Information : IOPS MiB/s Average min max 00:13:44.553 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 29929.95 116.91 4276.20 1232.19 9969.93 00:13:44.553 ======================================================== 00:13:44.553 Total : 29929.95 116.91 4276.20 1232.19 9969.93 00:13:44.553 00:13:44.553 [2024-10-08 18:21:02.290441] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:44.553 18:21:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:44.553 [2024-10-08 18:21:02.514345] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:49.818 [2024-10-08 18:21:07.637447] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:49.818 Initializing NVMe Controllers 00:13:49.818 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:49.818 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:49.818 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:49.818 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:49.818 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:49.818 Initialization complete. Launching workers. 
00:13:49.818 Starting thread on core 2 00:13:49.818 Starting thread on core 3 00:13:49.818 Starting thread on core 1 00:13:49.818 18:21:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:49.818 [2024-10-08 18:21:07.948730] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:53.101 [2024-10-08 18:21:11.107570] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:53.101 Initializing NVMe Controllers 00:13:53.101 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:53.101 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:53.101 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:53.101 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:53.101 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:53.101 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:53.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:53.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:53.101 Initialization complete. Launching workers. 00:13:53.101 Starting thread on core 1 with urgent priority queue 00:13:53.101 Starting thread on core 2 with urgent priority queue 00:13:53.101 Starting thread on core 3 with urgent priority queue 00:13:53.101 Starting thread on core 0 with urgent priority queue 00:13:53.101 SPDK bdev Controller (SPDK2 ) core 0: 3223.33 IO/s 31.02 secs/100000 ios 00:13:53.101 SPDK bdev Controller (SPDK2 ) core 1: 2559.33 IO/s 39.07 secs/100000 ios 00:13:53.101 SPDK bdev Controller (SPDK2 ) core 2: 2674.00 IO/s 37.40 secs/100000 ios 00:13:53.101 SPDK bdev Controller (SPDK2 ) core 3: 2988.00 IO/s 33.47 secs/100000 ios 00:13:53.101 ======================================================== 00:13:53.101 00:13:53.101 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:53.359 [2024-10-08 18:21:11.411822] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:53.359 Initializing NVMe Controllers 00:13:53.359 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:53.359 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:53.359 Namespace ID: 1 size: 0GB 00:13:53.359 Initialization complete. 00:13:53.359 INFO: using host memory buffer for IO 00:13:53.359 Hello world! 
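The spdk_nvme_perf, reconnect, arbitration and hello_world runs above all reach the same vfio-user controller through the transport-ID string handed to -r. A minimal sketch of repeating the two perf runs by hand, assuming an SPDK build tree at $SPDK_DIR and a target already serving the vfio-user2 socket shown in the trace, might look like:

  # Hedged sketch only; $SPDK_DIR and the socket path are assumptions taken from the commands traced above.
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  "$SPDK_DIR"/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2   # queue depth 128, 4096-byte reads, 5 s, core mask 0x2
  "$SPDK_DIR"/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2   # same load shape, writes

For reference, the MiB/s column in the result tables is just IOPS times the 4096-byte I/O size scaled to MiB, e.g. 32337.14 x 4096 / 1048576 ≈ 126.32 MiB/s for the read run.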
00:13:53.359 [2024-10-08 18:21:11.420957] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:53.359 18:21:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:53.617 [2024-10-08 18:21:11.721662] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:54.551 Initializing NVMe Controllers 00:13:54.551 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:54.551 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:54.551 Initialization complete. Launching workers. 00:13:54.551 submit (in ns) avg, min, max = 7469.7, 3547.8, 4016193.3 00:13:54.551 complete (in ns) avg, min, max = 27988.8, 2073.3, 4017402.2 00:13:54.551 00:13:54.551 Submit histogram 00:13:54.551 ================ 00:13:54.551 Range in us Cumulative Count 00:13:54.551 3.532 - 3.556: 0.0393% ( 5) 00:13:54.551 3.556 - 3.579: 1.2350% ( 152) 00:13:54.552 3.579 - 3.603: 5.3017% ( 517) 00:13:54.552 3.603 - 3.627: 14.1745% ( 1128) 00:13:54.552 3.627 - 3.650: 25.0924% ( 1388) 00:13:54.552 3.650 - 3.674: 35.2395% ( 1290) 00:13:54.552 3.674 - 3.698: 42.6099% ( 937) 00:13:54.552 3.698 - 3.721: 48.8240% ( 790) 00:13:54.552 3.721 - 3.745: 53.6616% ( 615) 00:13:54.552 3.745 - 3.769: 58.2239% ( 580) 00:13:54.552 3.769 - 3.793: 61.8265% ( 458) 00:13:54.552 3.793 - 3.816: 64.9257% ( 394) 00:13:54.552 3.816 - 3.840: 67.9305% ( 382) 00:13:54.552 3.840 - 3.864: 72.5635% ( 589) 00:13:54.552 3.864 - 3.887: 77.4247% ( 618) 00:13:54.552 3.887 - 3.911: 81.6566% ( 538) 00:13:54.552 3.911 - 3.935: 84.7322% ( 391) 00:13:54.552 3.935 - 3.959: 86.6829% ( 248) 00:13:54.552 3.959 - 3.982: 88.2482% ( 199) 00:13:54.552 3.982 - 4.006: 89.7585% ( 192) 00:13:54.552 4.006 - 4.030: 90.8519% ( 139) 00:13:54.552 4.030 - 4.053: 91.7329% ( 112) 00:13:54.552 4.053 - 4.077: 92.5667% ( 106) 00:13:54.552 4.077 - 4.101: 93.4241% ( 109) 00:13:54.552 4.101 - 4.124: 94.1556% ( 93) 00:13:54.552 4.124 - 4.148: 94.7455% ( 75) 00:13:54.552 4.148 - 4.172: 95.1467% ( 51) 00:13:54.552 4.172 - 4.196: 95.4299% ( 36) 00:13:54.552 4.196 - 4.219: 95.6973% ( 34) 00:13:54.552 4.219 - 4.243: 95.9254% ( 29) 00:13:54.552 4.243 - 4.267: 96.0513% ( 16) 00:13:54.552 4.267 - 4.290: 96.2007% ( 19) 00:13:54.552 4.290 - 4.314: 96.3738% ( 22) 00:13:54.552 4.314 - 4.338: 96.4918% ( 15) 00:13:54.552 4.338 - 4.361: 96.6176% ( 16) 00:13:54.552 4.361 - 4.385: 96.7199% ( 13) 00:13:54.552 4.385 - 4.409: 96.7828% ( 8) 00:13:54.552 4.409 - 4.433: 96.8693% ( 11) 00:13:54.552 4.433 - 4.456: 96.9008% ( 4) 00:13:54.552 4.456 - 4.480: 96.9559% ( 7) 00:13:54.552 4.480 - 4.504: 96.9637% ( 1) 00:13:54.552 4.504 - 4.527: 96.9716% ( 1) 00:13:54.552 4.527 - 4.551: 96.9873% ( 2) 00:13:54.552 4.575 - 4.599: 96.9952% ( 1) 00:13:54.552 4.622 - 4.646: 97.0031% ( 1) 00:13:54.552 4.646 - 4.670: 97.0109% ( 1) 00:13:54.552 4.693 - 4.717: 97.0267% ( 2) 00:13:54.552 4.717 - 4.741: 97.0345% ( 1) 00:13:54.552 4.741 - 4.764: 97.0424% ( 1) 00:13:54.552 4.764 - 4.788: 97.0581% ( 2) 00:13:54.552 4.788 - 4.812: 97.0739% ( 2) 00:13:54.552 4.812 - 4.836: 97.1132% ( 5) 00:13:54.552 4.836 - 4.859: 97.1368% ( 3) 00:13:54.552 4.859 - 4.883: 97.1840% ( 6) 00:13:54.552 4.883 - 4.907: 97.2626% ( 10) 00:13:54.552 4.907 - 4.930: 97.3098% ( 6) 00:13:54.552 4.930 - 4.954: 97.3413% ( 4) 00:13:54.552 4.954 - 
4.978: 97.4200% ( 10) 00:13:54.552 4.978 - 5.001: 97.4593% ( 5) 00:13:54.552 5.001 - 5.025: 97.4829% ( 3) 00:13:54.552 5.025 - 5.049: 97.5301% ( 6) 00:13:54.552 5.049 - 5.073: 97.5458% ( 2) 00:13:54.552 5.073 - 5.096: 97.5694% ( 3) 00:13:54.552 5.096 - 5.120: 97.6087% ( 5) 00:13:54.552 5.120 - 5.144: 97.6481% ( 5) 00:13:54.552 5.144 - 5.167: 97.6953% ( 6) 00:13:54.552 5.167 - 5.191: 97.7189% ( 3) 00:13:54.552 5.191 - 5.215: 97.7346% ( 2) 00:13:54.552 5.215 - 5.239: 97.7425% ( 1) 00:13:54.552 5.239 - 5.262: 97.7739% ( 4) 00:13:54.552 5.262 - 5.286: 97.7897% ( 2) 00:13:54.552 5.286 - 5.310: 97.8211% ( 4) 00:13:54.552 5.310 - 5.333: 97.8369% ( 2) 00:13:54.552 5.333 - 5.357: 97.8526% ( 2) 00:13:54.552 5.357 - 5.381: 97.8605% ( 1) 00:13:54.552 5.404 - 5.428: 97.8683% ( 1) 00:13:54.552 5.452 - 5.476: 97.8762% ( 1) 00:13:54.552 5.499 - 5.523: 97.8841% ( 1) 00:13:54.552 5.547 - 5.570: 97.8919% ( 1) 00:13:54.552 5.570 - 5.594: 97.8998% ( 1) 00:13:54.552 5.594 - 5.618: 97.9155% ( 2) 00:13:54.552 5.713 - 5.736: 97.9234% ( 1) 00:13:54.552 5.831 - 5.855: 97.9313% ( 1) 00:13:54.552 5.902 - 5.926: 97.9391% ( 1) 00:13:54.552 5.926 - 5.950: 97.9548% ( 2) 00:13:54.552 5.950 - 5.973: 97.9627% ( 1) 00:13:54.552 5.973 - 5.997: 97.9706% ( 1) 00:13:54.552 6.021 - 6.044: 97.9784% ( 1) 00:13:54.552 6.044 - 6.068: 97.9863% ( 1) 00:13:54.552 6.068 - 6.116: 98.0099% ( 3) 00:13:54.552 6.116 - 6.163: 98.0178% ( 1) 00:13:54.552 6.163 - 6.210: 98.0335% ( 2) 00:13:54.552 6.210 - 6.258: 98.0492% ( 2) 00:13:54.552 6.495 - 6.542: 98.0650% ( 2) 00:13:54.552 7.064 - 7.111: 98.0728% ( 1) 00:13:54.552 7.206 - 7.253: 98.0807% ( 1) 00:13:54.552 7.348 - 7.396: 98.0964% ( 2) 00:13:54.552 7.490 - 7.538: 98.1043% ( 1) 00:13:54.552 7.870 - 7.917: 98.1122% ( 1) 00:13:54.552 8.154 - 8.201: 98.1200% ( 1) 00:13:54.552 8.201 - 8.249: 98.1279% ( 1) 00:13:54.552 8.344 - 8.391: 98.1358% ( 1) 00:13:54.552 8.486 - 8.533: 98.1594% ( 3) 00:13:54.552 8.533 - 8.581: 98.1751% ( 2) 00:13:54.552 8.581 - 8.628: 98.1830% ( 1) 00:13:54.552 8.723 - 8.770: 98.1987% ( 2) 00:13:54.552 8.770 - 8.818: 98.2066% ( 1) 00:13:54.552 8.960 - 9.007: 98.2144% ( 1) 00:13:54.552 9.007 - 9.055: 98.2302% ( 2) 00:13:54.552 9.150 - 9.197: 98.2380% ( 1) 00:13:54.552 9.197 - 9.244: 98.2538% ( 2) 00:13:54.552 9.244 - 9.292: 98.2695% ( 2) 00:13:54.552 9.292 - 9.339: 98.2852% ( 2) 00:13:54.552 9.339 - 9.387: 98.3010% ( 2) 00:13:54.552 9.387 - 9.434: 98.3167% ( 2) 00:13:54.552 9.529 - 9.576: 98.3324% ( 2) 00:13:54.552 9.576 - 9.624: 98.3481% ( 2) 00:13:54.552 9.624 - 9.671: 98.3717% ( 3) 00:13:54.552 9.671 - 9.719: 98.3796% ( 1) 00:13:54.552 9.766 - 9.813: 98.3875% ( 1) 00:13:54.552 9.813 - 9.861: 98.4111% ( 3) 00:13:54.552 9.956 - 10.003: 98.4189% ( 1) 00:13:54.552 10.003 - 10.050: 98.4347% ( 2) 00:13:54.552 10.240 - 10.287: 98.4504% ( 2) 00:13:54.552 10.335 - 10.382: 98.4583% ( 1) 00:13:54.552 10.430 - 10.477: 98.4661% ( 1) 00:13:54.552 10.477 - 10.524: 98.4740% ( 1) 00:13:54.552 10.619 - 10.667: 98.4897% ( 2) 00:13:54.552 10.761 - 10.809: 98.4976% ( 1) 00:13:54.552 10.951 - 10.999: 98.5055% ( 1) 00:13:54.552 10.999 - 11.046: 98.5133% ( 1) 00:13:54.552 11.283 - 11.330: 98.5291% ( 2) 00:13:54.552 11.330 - 11.378: 98.5448% ( 2) 00:13:54.552 11.425 - 11.473: 98.5527% ( 1) 00:13:54.552 11.473 - 11.520: 98.5605% ( 1) 00:13:54.552 11.757 - 11.804: 98.5684% ( 1) 00:13:54.552 11.899 - 11.947: 98.5763% ( 1) 00:13:54.552 11.947 - 11.994: 98.5841% ( 1) 00:13:54.552 12.089 - 12.136: 98.5920% ( 1) 00:13:54.552 12.231 - 12.326: 98.5999% ( 1) 00:13:54.552 12.421 - 12.516: 98.6156% ( 2) 
00:13:54.552 12.516 - 12.610: 98.6235% ( 1) 00:13:54.552 12.705 - 12.800: 98.6313% ( 1) 00:13:54.552 12.800 - 12.895: 98.6392% ( 1) 00:13:54.552 12.895 - 12.990: 98.6471% ( 1) 00:13:54.552 13.084 - 13.179: 98.6628% ( 2) 00:13:54.552 13.179 - 13.274: 98.6785% ( 2) 00:13:54.552 13.274 - 13.369: 98.6942% ( 2) 00:13:54.552 13.369 - 13.464: 98.7021% ( 1) 00:13:54.552 13.559 - 13.653: 98.7100% ( 1) 00:13:54.552 13.843 - 13.938: 98.7178% ( 1) 00:13:54.552 14.033 - 14.127: 98.7257% ( 1) 00:13:54.552 14.412 - 14.507: 98.7414% ( 2) 00:13:54.552 14.507 - 14.601: 98.7493% ( 1) 00:13:54.552 14.791 - 14.886: 98.7572% ( 1) 00:13:54.552 14.981 - 15.076: 98.7650% ( 1) 00:13:54.552 15.170 - 15.265: 98.7729% ( 1) 00:13:54.552 15.265 - 15.360: 98.7808% ( 1) 00:13:54.552 16.972 - 17.067: 98.7965% ( 2) 00:13:54.552 17.161 - 17.256: 98.8122% ( 2) 00:13:54.552 17.256 - 17.351: 98.8280% ( 2) 00:13:54.552 17.351 - 17.446: 98.8752% ( 6) 00:13:54.552 17.446 - 17.541: 98.8988% ( 3) 00:13:54.552 17.541 - 17.636: 98.9145% ( 2) 00:13:54.552 17.636 - 17.730: 98.9617% ( 6) 00:13:54.552 17.730 - 17.825: 99.0010% ( 5) 00:13:54.552 17.825 - 17.920: 99.0640% ( 8) 00:13:54.552 17.920 - 18.015: 99.1269% ( 8) 00:13:54.552 18.015 - 18.110: 99.2213% ( 12) 00:13:54.552 18.110 - 18.204: 99.2842% ( 8) 00:13:54.552 18.204 - 18.299: 99.3865% ( 13) 00:13:54.552 18.299 - 18.394: 99.4572% ( 9) 00:13:54.552 18.394 - 18.489: 99.5752% ( 15) 00:13:54.552 18.489 - 18.584: 99.6146% ( 5) 00:13:54.552 18.584 - 18.679: 99.6382% ( 3) 00:13:54.552 18.679 - 18.773: 99.6460% ( 1) 00:13:54.552 18.773 - 18.868: 99.6696% ( 3) 00:13:54.552 18.868 - 18.963: 99.6932% ( 3) 00:13:54.552 18.963 - 19.058: 99.7404% ( 6) 00:13:54.552 19.058 - 19.153: 99.7483% ( 1) 00:13:54.552 19.153 - 19.247: 99.7640% ( 2) 00:13:54.552 19.342 - 19.437: 99.7798% ( 2) 00:13:54.552 19.437 - 19.532: 99.7876% ( 1) 00:13:54.552 19.911 - 20.006: 99.7955% ( 1) 00:13:54.552 20.101 - 20.196: 99.8034% ( 1) 00:13:54.552 20.196 - 20.290: 99.8112% ( 1) 00:13:54.552 20.764 - 20.859: 99.8269% ( 2) 00:13:54.552 20.859 - 20.954: 99.8348% ( 1) 00:13:54.552 21.428 - 21.523: 99.8427% ( 1) 00:13:54.552 21.618 - 21.713: 99.8505% ( 1) 00:13:54.552 21.902 - 21.997: 99.8584% ( 1) 00:13:54.552 22.566 - 22.661: 99.8663% ( 1) 00:13:54.552 23.040 - 23.135: 99.8741% ( 1) 00:13:54.552 23.609 - 23.704: 99.8820% ( 1) 00:13:54.552 27.686 - 27.876: 99.8899% ( 1) 00:13:54.552 29.393 - 29.582: 99.8977% ( 1) 00:13:54.552 29.582 - 29.772: 99.9056% ( 1) 00:13:54.553 77.748 - 78.127: 99.9135% ( 1) 00:13:54.553 3980.705 - 4004.978: 99.9685% ( 7) 00:13:54.553 4004.978 - 4029.250: 100.0000% ( 4) 00:13:54.553 00:13:54.553 Complete histogram 00:13:54.553 ================== 00:13:54.553 Range in us Cumulative Count 00:13:54.553 2.062 - 2.074: 0.0079% ( 1) 00:13:54.553 2.074 - 2.086: 2.9812% ( 378) 00:13:54.553 2.086 - 2.098: 18.7053% ( 1999) 00:13:54.553 2.098 - 2.110: 31.2043% ( 1589) 00:13:54.553 2.110 - 2.121: 41.2648% ( 1279) 00:13:54.553 2.121 - 2.133: 51.6243% ( 1317) 00:13:54.553 2.133 - 2.145: 58.0665% ( 819) 00:13:54.553 2.145 - 2.157: 62.7940% ( 601) 00:13:54.553 2.157 - 2.169: 68.2058% ( 688) 00:13:54.553 2.169 - 2.181: 71.4937% ( 418) 00:13:54.553 2.181 - 2.193: 74.8997% ( 433) 00:13:54.553 2.193 - 2.204: 78.7698% ( 492) 00:13:54.553 2.204 - 2.216: 82.1128% ( 425) 00:13:54.553 2.216 - 2.228: 84.9445% ( 360) 00:13:54.553 2.228 - 2.240: 87.2178% ( 289) 00:13:54.553 2.240 - 2.252: 88.7753% ( 198) 00:13:54.553 2.252 - 2.264: 90.1439% ( 174) 00:13:54.553 2.264 - 2.276: 91.8902% ( 222) 00:13:54.553 2.276 - 2.287: 
93.3847% ( 190) 00:13:54.553 2.287 - 2.299: 94.4624% ( 137) 00:13:54.553 2.299 - 2.311: 94.9501% ( 62) 00:13:54.553 2.311 - 2.323: 95.2096% ( 33) 00:13:54.553 2.323 - 2.335: 95.3669% ( 20) 00:13:54.553 2.335 - 2.347: 95.4928% ( 16) 00:13:54.553 2.347 - 2.359: 95.6501% ( 20) 00:13:54.553 2.359 - 2.370: 95.9805% ( 42) 00:13:54.553 2.370 - 2.382: 96.3738% ( 50) 00:13:54.553 2.382 - 2.394: 96.6806% ( 39) 00:13:54.553 2.394 - 2.406: 96.9952% ( 40) 00:13:54.553 2.406 - 2.418: 97.2233% ( 29) 00:13:54.553 2.418 - 2.430: 97.3964% ( 22) 00:13:54.553 2.430 - 2.441: 97.5458% ( 19) 00:13:54.553 2.441 - 2.453: 97.6953% ( 19) 00:13:54.553 2.453 - 2.465: 97.8683% ( 22) 00:13:54.553 2.465 - 2.477: 97.9706% ( 13) 00:13:54.553 2.477 - 2.489: 98.0964% ( 16) 00:13:54.553 2.489 - 2.501: 98.1672% ( 9) 00:13:54.553 2.501 - 2.513: 98.2459% ( 10) 00:13:54.553 2.513 - 2.524: 98.2616% ( 2) 00:13:54.553 2.524 - 2.536: 98.3010% ( 5) 00:13:54.553 2.536 - 2.548: 98.3167% ( 2) 00:13:54.553 2.548 - 2.560: 98.3481% ( 4) 00:13:54.553 2.560 - 2.572: 98.3560% ( 1) 00:13:54.553 2.572 - 2.584: 98.3639% ( 1) 00:13:54.553 2.584 - 2.596: 98.3953% ( 4) 00:13:54.553 2.596 - 2.607: 98.4032% ( 1) 00:13:54.553 2.607 - 2.619: 98.4189% ( 2) 00:13:54.553 2.631 - 2.643: 98.4347% ( 2) 00:13:54.553 2.726 - 2.738: 98.4583% ( 3) 00:13:54.553 2.738 - 2.750: 98.4661% ( 1) 00:13:54.553 2.809 - 2.821: 98.4740% ( 1) 00:13:54.553 2.833 - 2.844: 9[2024-10-08 18:21:12.816211] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:54.811 8.4819% ( 1) 00:13:54.811 3.437 - 3.461: 98.4897% ( 1) 00:13:54.811 3.461 - 3.484: 98.4976% ( 1) 00:13:54.811 3.508 - 3.532: 98.5133% ( 2) 00:13:54.811 3.532 - 3.556: 98.5212% ( 1) 00:13:54.811 3.579 - 3.603: 98.5369% ( 2) 00:13:54.811 3.603 - 3.627: 98.5527% ( 2) 00:13:54.811 3.674 - 3.698: 98.5684% ( 2) 00:13:54.811 3.698 - 3.721: 98.5763% ( 1) 00:13:54.811 3.721 - 3.745: 98.5841% ( 1) 00:13:54.811 3.745 - 3.769: 98.5920% ( 1) 00:13:54.811 3.769 - 3.793: 98.5999% ( 1) 00:13:54.811 3.793 - 3.816: 98.6077% ( 1) 00:13:54.811 3.840 - 3.864: 98.6235% ( 2) 00:13:54.811 3.864 - 3.887: 98.6313% ( 1) 00:13:54.811 3.887 - 3.911: 98.6392% ( 1) 00:13:54.811 3.911 - 3.935: 98.6471% ( 1) 00:13:54.811 3.982 - 4.006: 98.6628% ( 2) 00:13:54.811 4.030 - 4.053: 98.6707% ( 1) 00:13:54.811 4.124 - 4.148: 98.6785% ( 1) 00:13:54.811 4.172 - 4.196: 98.6864% ( 1) 00:13:54.811 5.570 - 5.594: 98.6942% ( 1) 00:13:54.811 5.641 - 5.665: 98.7021% ( 1) 00:13:54.811 5.760 - 5.784: 98.7100% ( 1) 00:13:54.811 6.921 - 6.969: 98.7178% ( 1) 00:13:54.811 6.969 - 7.016: 98.7257% ( 1) 00:13:54.811 7.016 - 7.064: 98.7336% ( 1) 00:13:54.811 7.159 - 7.206: 98.7414% ( 1) 00:13:54.811 7.301 - 7.348: 98.7493% ( 1) 00:13:54.811 7.490 - 7.538: 98.7572% ( 1) 00:13:54.811 7.585 - 7.633: 98.7650% ( 1) 00:13:54.811 7.633 - 7.680: 98.7729% ( 1) 00:13:54.811 7.775 - 7.822: 98.7886% ( 2) 00:13:54.811 8.154 - 8.201: 98.7965% ( 1) 00:13:54.811 8.201 - 8.249: 98.8044% ( 1) 00:13:54.811 8.770 - 8.818: 98.8122% ( 1) 00:13:54.811 9.197 - 9.244: 98.8201% ( 1) 00:13:54.811 10.524 - 10.572: 98.8280% ( 1) 00:13:54.811 13.748 - 13.843: 98.8358% ( 1) 00:13:54.811 15.550 - 15.644: 98.8594% ( 3) 00:13:54.811 15.834 - 15.929: 98.8752% ( 2) 00:13:54.811 15.929 - 16.024: 98.9381% ( 8) 00:13:54.811 16.024 - 16.119: 98.9617% ( 3) 00:13:54.811 16.119 - 16.213: 98.9853% ( 3) 00:13:54.811 16.213 - 16.308: 99.0089% ( 3) 00:13:54.811 16.403 - 16.498: 99.0482% ( 5) 00:13:54.811 16.498 - 16.593: 99.0797% ( 4) 00:13:54.811 16.593 - 16.687: 
99.1033% ( 3) 00:13:54.811 16.687 - 16.782: 99.1190% ( 2) 00:13:54.811 16.782 - 16.877: 99.1741% ( 7) 00:13:54.811 16.877 - 16.972: 99.2134% ( 5) 00:13:54.811 16.972 - 17.067: 99.2370% ( 3) 00:13:54.811 17.067 - 17.161: 99.2449% ( 1) 00:13:54.811 17.161 - 17.256: 99.2606% ( 2) 00:13:54.811 17.256 - 17.351: 99.2763% ( 2) 00:13:54.811 17.351 - 17.446: 99.2921% ( 2) 00:13:54.811 17.446 - 17.541: 99.3078% ( 2) 00:13:54.811 17.730 - 17.825: 99.3157% ( 1) 00:13:54.811 18.110 - 18.204: 99.3235% ( 1) 00:13:54.811 18.394 - 18.489: 99.3314% ( 1) 00:13:54.811 20.575 - 20.670: 99.3393% ( 1) 00:13:54.811 26.548 - 26.738: 99.3471% ( 1) 00:13:54.811 29.582 - 29.772: 99.3550% ( 1) 00:13:54.811 3034.074 - 3046.210: 99.3629% ( 1) 00:13:54.811 3980.705 - 4004.978: 99.7876% ( 54) 00:13:54.811 4004.978 - 4029.250: 100.0000% ( 27) 00:13:54.811 00:13:54.811 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:54.811 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:54.811 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:54.812 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:54.812 18:21:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:55.070 [ 00:13:55.070 { 00:13:55.070 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:55.070 "subtype": "Discovery", 00:13:55.070 "listen_addresses": [], 00:13:55.070 "allow_any_host": true, 00:13:55.070 "hosts": [] 00:13:55.070 }, 00:13:55.070 { 00:13:55.070 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:55.070 "subtype": "NVMe", 00:13:55.070 "listen_addresses": [ 00:13:55.070 { 00:13:55.070 "trtype": "VFIOUSER", 00:13:55.070 "adrfam": "IPv4", 00:13:55.070 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:55.070 "trsvcid": "0" 00:13:55.070 } 00:13:55.070 ], 00:13:55.070 "allow_any_host": true, 00:13:55.070 "hosts": [], 00:13:55.070 "serial_number": "SPDK1", 00:13:55.070 "model_number": "SPDK bdev Controller", 00:13:55.070 "max_namespaces": 32, 00:13:55.070 "min_cntlid": 1, 00:13:55.070 "max_cntlid": 65519, 00:13:55.070 "namespaces": [ 00:13:55.070 { 00:13:55.070 "nsid": 1, 00:13:55.070 "bdev_name": "Malloc1", 00:13:55.070 "name": "Malloc1", 00:13:55.070 "nguid": "4F73AD7C9778419DB35820F28C963A88", 00:13:55.070 "uuid": "4f73ad7c-9778-419d-b358-20f28c963a88" 00:13:55.070 }, 00:13:55.070 { 00:13:55.070 "nsid": 2, 00:13:55.070 "bdev_name": "Malloc3", 00:13:55.070 "name": "Malloc3", 00:13:55.070 "nguid": "CE91289229044719A15716B323C91530", 00:13:55.070 "uuid": "ce912892-2904-4719-a157-16b323c91530" 00:13:55.070 } 00:13:55.070 ] 00:13:55.070 }, 00:13:55.070 { 00:13:55.070 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:55.070 "subtype": "NVMe", 00:13:55.070 "listen_addresses": [ 00:13:55.070 { 00:13:55.070 "trtype": "VFIOUSER", 00:13:55.070 "adrfam": "IPv4", 00:13:55.070 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:55.070 "trsvcid": "0" 00:13:55.070 } 00:13:55.070 ], 00:13:55.070 "allow_any_host": true, 00:13:55.070 "hosts": [], 00:13:55.070 "serial_number": "SPDK2", 00:13:55.070 "model_number": "SPDK bdev Controller", 00:13:55.070 "max_namespaces": 32, 00:13:55.070 "min_cntlid": 1, 
00:13:55.070 "max_cntlid": 65519, 00:13:55.070 "namespaces": [ 00:13:55.070 { 00:13:55.070 "nsid": 1, 00:13:55.070 "bdev_name": "Malloc2", 00:13:55.070 "name": "Malloc2", 00:13:55.070 "nguid": "8E9E39DC8C2B467B91ACFDD4DAF72108", 00:13:55.070 "uuid": "8e9e39dc-8c2b-467b-91ac-fdd4daf72108" 00:13:55.070 } 00:13:55.070 ] 00:13:55.070 } 00:13:55.070 ] 00:13:55.070 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:55.070 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1229158 00:13:55.070 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:55.070 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:55.070 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:55.070 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:55.070 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:55.070 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:55.070 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:55.070 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:55.070 [2024-10-08 18:21:13.309761] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:55.328 Malloc4 00:13:55.328 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:55.586 [2024-10-08 18:21:13.718937] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:55.586 18:21:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:55.586 Asynchronous Event Request test 00:13:55.586 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:55.586 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:55.586 Registering asynchronous event callbacks... 00:13:55.586 Starting namespace attribute notice tests for all controllers... 00:13:55.586 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:55.586 aer_cb - Changed Namespace 00:13:55.586 Cleaning up... 
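The Asynchronous Event Request test above works by hot-adding a second namespace to cnode2 while test/nvme/aer/aer waits on /tmp/aer_touch_file; the "aer_cb - Changed Namespace" line is the namespace-attribute notice that the addition triggers. A sketch of the same hot-add issued by hand, assuming the default /var/tmp/spdk.sock RPC socket, would be:

  # Sketch of the namespace hot-add that fires the AER; sizes and names are copied from the rpc.py calls traced above.
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4                          # 64 MB malloc bdev, 512-byte blocks
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2     # attach it as NSID 2 on cnode2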
00:13:55.843 [ 00:13:55.843 { 00:13:55.843 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:55.843 "subtype": "Discovery", 00:13:55.843 "listen_addresses": [], 00:13:55.844 "allow_any_host": true, 00:13:55.844 "hosts": [] 00:13:55.844 }, 00:13:55.844 { 00:13:55.844 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:55.844 "subtype": "NVMe", 00:13:55.844 "listen_addresses": [ 00:13:55.844 { 00:13:55.844 "trtype": "VFIOUSER", 00:13:55.844 "adrfam": "IPv4", 00:13:55.844 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:55.844 "trsvcid": "0" 00:13:55.844 } 00:13:55.844 ], 00:13:55.844 "allow_any_host": true, 00:13:55.844 "hosts": [], 00:13:55.844 "serial_number": "SPDK1", 00:13:55.844 "model_number": "SPDK bdev Controller", 00:13:55.844 "max_namespaces": 32, 00:13:55.844 "min_cntlid": 1, 00:13:55.844 "max_cntlid": 65519, 00:13:55.844 "namespaces": [ 00:13:55.844 { 00:13:55.844 "nsid": 1, 00:13:55.844 "bdev_name": "Malloc1", 00:13:55.844 "name": "Malloc1", 00:13:55.844 "nguid": "4F73AD7C9778419DB35820F28C963A88", 00:13:55.844 "uuid": "4f73ad7c-9778-419d-b358-20f28c963a88" 00:13:55.844 }, 00:13:55.844 { 00:13:55.844 "nsid": 2, 00:13:55.844 "bdev_name": "Malloc3", 00:13:55.844 "name": "Malloc3", 00:13:55.844 "nguid": "CE91289229044719A15716B323C91530", 00:13:55.844 "uuid": "ce912892-2904-4719-a157-16b323c91530" 00:13:55.844 } 00:13:55.844 ] 00:13:55.844 }, 00:13:55.844 { 00:13:55.844 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:55.844 "subtype": "NVMe", 00:13:55.844 "listen_addresses": [ 00:13:55.844 { 00:13:55.844 "trtype": "VFIOUSER", 00:13:55.844 "adrfam": "IPv4", 00:13:55.844 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:55.844 "trsvcid": "0" 00:13:55.844 } 00:13:55.844 ], 00:13:55.844 "allow_any_host": true, 00:13:55.844 "hosts": [], 00:13:55.844 "serial_number": "SPDK2", 00:13:55.844 "model_number": "SPDK bdev Controller", 00:13:55.844 "max_namespaces": 32, 00:13:55.844 "min_cntlid": 1, 00:13:55.844 "max_cntlid": 65519, 00:13:55.844 "namespaces": [ 00:13:55.844 { 00:13:55.844 "nsid": 1, 00:13:55.844 "bdev_name": "Malloc2", 00:13:55.844 "name": "Malloc2", 00:13:55.844 "nguid": "8E9E39DC8C2B467B91ACFDD4DAF72108", 00:13:55.844 "uuid": "8e9e39dc-8c2b-467b-91ac-fdd4daf72108" 00:13:55.844 }, 00:13:55.844 { 00:13:55.844 "nsid": 2, 00:13:55.844 "bdev_name": "Malloc4", 00:13:55.844 "name": "Malloc4", 00:13:55.844 "nguid": "A213E074EB5E4B019E9B9EBA0F67D8B7", 00:13:55.844 "uuid": "a213e074-eb5e-4b01-9e9b-9eba0f67d8b7" 00:13:55.844 } 00:13:55.844 ] 00:13:55.844 } 00:13:55.844 ] 00:13:55.844 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1229158 00:13:55.844 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:55.844 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1222932 00:13:55.844 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1222932 ']' 00:13:55.844 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1222932 00:13:55.844 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:13:55.844 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:55.844 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1222932 00:13:55.844 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:55.844 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:55.844 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1222932' 00:13:55.844 killing process with pid 1222932 00:13:55.844 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1222932 00:13:55.844 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1222932 00:13:56.409 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:56.409 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:56.409 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:56.409 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:56.410 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:56.410 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1229312 00:13:56.410 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:56.410 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1229312' 00:13:56.410 Process pid: 1229312 00:13:56.410 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:56.410 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1229312 00:13:56.410 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1229312 ']' 00:13:56.410 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.410 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:56.410 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.410 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:56.410 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:56.410 [2024-10-08 18:21:14.494962] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:56.410 [2024-10-08 18:21:14.496052] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
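At this point the target is being brought back up in interrupt mode: nvmf_tgt is relaunched with --interrupt-mode on cores 0-3 and the VFIOUSER transport is created with the extra '-M -I' flags held in transport_args, as the following lines trace. A condensed sketch of that bring-up, assuming the same build tree and the default RPC socket, might be:

  # Condensed from the setup_nvmf_vfio_user trace; the '-M -I' transport flags are passed through exactly as the script does.
  "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
  mkdir -p /var/run/vfio-user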
00:13:56.410 [2024-10-08 18:21:14.496126] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.410 [2024-10-08 18:21:14.556618] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:56.410 [2024-10-08 18:21:14.663549] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.410 [2024-10-08 18:21:14.663619] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.410 [2024-10-08 18:21:14.663647] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:56.410 [2024-10-08 18:21:14.663659] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:56.410 [2024-10-08 18:21:14.663669] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:56.410 [2024-10-08 18:21:14.665148] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.410 [2024-10-08 18:21:14.665213] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:56.410 [2024-10-08 18:21:14.665280] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:13:56.410 [2024-10-08 18:21:14.665283] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.668 [2024-10-08 18:21:14.760079] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:56.668 [2024-10-08 18:21:14.760308] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:56.669 [2024-10-08 18:21:14.760551] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:56.669 [2024-10-08 18:21:14.761139] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:56.669 [2024-10-08 18:21:14.761388] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:13:56.669 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:56.669 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:13:56.669 18:21:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:57.604 18:21:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:57.862 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:57.862 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:57.862 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:57.862 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:57.862 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:58.121 Malloc1 00:13:58.121 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:58.688 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:58.946 18:21:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:59.204 18:21:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:59.204 18:21:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:59.204 18:21:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:59.462 Malloc2 00:13:59.462 18:21:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:59.720 18:21:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:59.977 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:00.234 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:00.234 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1229312 00:14:00.234 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 1229312 ']' 00:14:00.234 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1229312 00:14:00.234 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:14:00.234 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:00.234 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1229312 00:14:00.234 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:00.234 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:00.234 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1229312' 00:14:00.234 killing process with pid 1229312 00:14:00.234 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1229312 00:14:00.234 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1229312 00:14:00.492 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:00.492 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:00.492 00:14:00.492 real 0m53.517s 00:14:00.492 user 3m26.154s 00:14:00.492 sys 0m3.973s 00:14:00.492 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:00.492 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:00.492 ************************************ 00:14:00.492 END TEST nvmf_vfio_user 00:14:00.492 ************************************ 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:00.751 ************************************ 00:14:00.751 START TEST nvmf_vfio_user_nvme_compliance 00:14:00.751 ************************************ 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:00.751 * Looking for test storage... 
00:14:00.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lcov --version 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:00.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.751 --rc genhtml_branch_coverage=1 00:14:00.751 --rc genhtml_function_coverage=1 00:14:00.751 --rc genhtml_legend=1 00:14:00.751 --rc geninfo_all_blocks=1 00:14:00.751 --rc geninfo_unexecuted_blocks=1 00:14:00.751 00:14:00.751 ' 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:00.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.751 --rc genhtml_branch_coverage=1 00:14:00.751 --rc genhtml_function_coverage=1 00:14:00.751 --rc genhtml_legend=1 00:14:00.751 --rc geninfo_all_blocks=1 00:14:00.751 --rc geninfo_unexecuted_blocks=1 00:14:00.751 00:14:00.751 ' 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:00.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.751 --rc genhtml_branch_coverage=1 00:14:00.751 --rc genhtml_function_coverage=1 00:14:00.751 --rc genhtml_legend=1 00:14:00.751 --rc geninfo_all_blocks=1 00:14:00.751 --rc geninfo_unexecuted_blocks=1 00:14:00.751 00:14:00.751 ' 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:00.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.751 --rc genhtml_branch_coverage=1 00:14:00.751 --rc genhtml_function_coverage=1 00:14:00.751 --rc genhtml_legend=1 00:14:00.751 --rc geninfo_all_blocks=1 00:14:00.751 --rc 
geninfo_unexecuted_blocks=1 00:14:00.751 00:14:00.751 ' 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.751 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.752 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.752 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.752 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.752 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:00.752 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.752 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:00.752 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:00.752 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:00.752 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:00.752 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.752 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.752 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:00.752 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:00.752 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:00.752 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:00.752 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:00.752 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:00.752 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:00.752 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:00.752 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:00.752 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:00.752 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1229934 00:14:00.752 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:00.752 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1229934' 00:14:00.752 Process pid: 1229934 00:14:00.752 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:00.752 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1229934 00:14:00.752 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 1229934 ']' 00:14:00.752 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.752 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:00.752 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.752 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:00.752 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:00.752 [2024-10-08 18:21:19.022630] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
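Once this second nvmf_tgt instance (core mask 0x7) is listening, the compliance script builds a single vfio-user subsystem and points the compliance binary at it, as the rpc_cmd calls traced below show. A condensed sketch of that sequence, using scripts/rpc.py directly in place of the script's rpc_cmd wrapper and assuming the default RPC socket, would be:

  # Condensed from compliance.sh as traced below; names, sizes and socket paths come from those rpc_cmd calls.
  scripts/rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
  test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'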
00:14:00.752 [2024-10-08 18:21:19.022711] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.011 [2024-10-08 18:21:19.085136] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:01.011 [2024-10-08 18:21:19.194950] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:01.011 [2024-10-08 18:21:19.195017] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:01.011 [2024-10-08 18:21:19.195046] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:01.011 [2024-10-08 18:21:19.195057] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:01.011 [2024-10-08 18:21:19.195066] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:01.011 [2024-10-08 18:21:19.195988] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.011 [2024-10-08 18:21:19.196055] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:01.011 [2024-10-08 18:21:19.196058] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.269 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:01.269 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:14:01.269 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:02.203 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:02.203 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:02.203 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:02.203 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.203 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:02.203 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.203 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:02.203 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:02.203 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.203 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:02.203 malloc0 00:14:02.203 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.203 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:02.203 18:21:20 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.203 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:02.203 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.203 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:02.203 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.203 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:02.203 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.203 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:02.203 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.203 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:02.203 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.203 18:21:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:02.461 00:14:02.461 00:14:02.461 CUnit - A unit testing framework for C - Version 2.1-3 00:14:02.461 http://cunit.sourceforge.net/ 00:14:02.461 00:14:02.461 00:14:02.461 Suite: nvme_compliance 00:14:02.461 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-08 18:21:20.536855] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:02.461 [2024-10-08 18:21:20.538312] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:02.461 [2024-10-08 18:21:20.538355] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:02.461 [2024-10-08 18:21:20.538368] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:02.461 [2024-10-08 18:21:20.539879] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:02.461 passed 00:14:02.461 Test: admin_identify_ctrlr_verify_fused ...[2024-10-08 18:21:20.628468] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:02.461 [2024-10-08 18:21:20.631491] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:02.461 passed 00:14:02.461 Test: admin_identify_ns ...[2024-10-08 18:21:20.722926] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:02.719 [2024-10-08 18:21:20.782309] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:02.719 [2024-10-08 18:21:20.790319] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:02.719 [2024-10-08 18:21:20.811459] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:14:02.719 passed 00:14:02.719 Test: admin_get_features_mandatory_features ...[2024-10-08 18:21:20.899046] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:02.719 [2024-10-08 18:21:20.902065] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:02.719 passed 00:14:02.719 Test: admin_get_features_optional_features ...[2024-10-08 18:21:20.989717] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:02.719 [2024-10-08 18:21:20.993743] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:02.977 passed 00:14:02.977 Test: admin_set_features_number_of_queues ...[2024-10-08 18:21:21.080140] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:02.977 [2024-10-08 18:21:21.182398] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:02.977 passed 00:14:03.235 Test: admin_get_log_page_mandatory_logs ...[2024-10-08 18:21:21.269453] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:03.235 [2024-10-08 18:21:21.272478] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:03.235 passed 00:14:03.235 Test: admin_get_log_page_with_lpo ...[2024-10-08 18:21:21.358103] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:03.235 [2024-10-08 18:21:21.429303] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:03.235 [2024-10-08 18:21:21.442384] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:03.235 passed 00:14:03.493 Test: fabric_property_get ...[2024-10-08 18:21:21.524090] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:03.493 [2024-10-08 18:21:21.525456] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:03.493 [2024-10-08 18:21:21.529103] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:03.493 passed 00:14:03.493 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-08 18:21:21.614692] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:03.493 [2024-10-08 18:21:21.616000] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:03.493 [2024-10-08 18:21:21.618713] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:03.493 passed 00:14:03.493 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-08 18:21:21.701034] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:03.751 [2024-10-08 18:21:21.784302] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:03.751 [2024-10-08 18:21:21.800314] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:03.751 [2024-10-08 18:21:21.805407] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:03.751 passed 00:14:03.751 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-08 18:21:21.892527] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:03.751 [2024-10-08 18:21:21.893848] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:03.751 [2024-10-08 18:21:21.895554] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:14:03.751 passed 00:14:03.751 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-08 18:21:21.977635] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.009 [2024-10-08 18:21:22.053313] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:04.009 [2024-10-08 18:21:22.077299] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:04.009 [2024-10-08 18:21:22.082420] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.009 passed 00:14:04.009 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-08 18:21:22.169461] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.009 [2024-10-08 18:21:22.170783] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:04.009 [2024-10-08 18:21:22.170836] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:04.009 [2024-10-08 18:21:22.172479] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.009 passed 00:14:04.009 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-08 18:21:22.256838] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.267 [2024-10-08 18:21:22.348301] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:04.267 [2024-10-08 18:21:22.356301] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:04.267 [2024-10-08 18:21:22.364300] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:04.267 [2024-10-08 18:21:22.372301] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:04.267 [2024-10-08 18:21:22.401411] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.267 passed 00:14:04.267 Test: admin_create_io_sq_verify_pc ...[2024-10-08 18:21:22.484930] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:04.267 [2024-10-08 18:21:22.501326] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:04.267 [2024-10-08 18:21:22.519344] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:04.267 passed 00:14:04.525 Test: admin_create_io_qp_max_qps ...[2024-10-08 18:21:22.600902] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:05.458 [2024-10-08 18:21:23.703319] nvme_ctrlr.c:5535:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:14:06.025 [2024-10-08 18:21:24.090655] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:06.025 passed 00:14:06.025 Test: admin_create_io_sq_shared_cq ...[2024-10-08 18:21:24.176943] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:06.025 [2024-10-08 18:21:24.307310] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:06.283 [2024-10-08 18:21:24.344392] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:06.283 passed 00:14:06.283 00:14:06.283 Run Summary: Type Total Ran Passed Failed Inactive 00:14:06.283 suites 1 1 n/a 0 0 00:14:06.283 tests 18 18 18 0 0 00:14:06.283 asserts 360 
360 360 0 n/a 00:14:06.283 00:14:06.283 Elapsed time = 1.582 seconds 00:14:06.283 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1229934 00:14:06.283 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 1229934 ']' 00:14:06.283 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 1229934 00:14:06.283 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:14:06.283 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:06.283 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1229934 00:14:06.283 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:06.283 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:06.283 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1229934' 00:14:06.283 killing process with pid 1229934 00:14:06.283 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 1229934 00:14:06.283 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 1229934 00:14:06.541 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:06.541 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:06.541 00:14:06.541 real 0m5.896s 00:14:06.541 user 0m16.343s 00:14:06.541 sys 0m0.565s 00:14:06.541 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:06.541 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:06.541 ************************************ 00:14:06.541 END TEST nvmf_vfio_user_nvme_compliance 00:14:06.541 ************************************ 00:14:06.541 18:21:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:06.541 18:21:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:06.541 18:21:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:06.541 18:21:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:06.541 ************************************ 00:14:06.541 START TEST nvmf_vfio_user_fuzz 00:14:06.541 ************************************ 00:14:06.541 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:06.541 * Looking for test storage... 
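[editor's note] The "START TEST" / "END TEST" banners and the real/user/sys lines above come from the run_test wrapper in autotest_common.sh. The following is only a stand-in reconstructed from that observable output, not SPDK's actual implementation; the argument-count guard merely mirrors the shape of the "'[' 3 -le 1 ']'" check visible in the xtrace, and the usage message is invented.

# Hedged stand-in for run_test, inferred from the banners and timing lines above.
run_test() {
    if [ "$#" -le 1 ]; then                 # same shape as the argument-count check in the xtrace
        echo "run_test: need a test name and a command" >&2
        return 1
    fi
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                               # the real/user/sys lines in the log come from timing the test
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}
# Call matching the invocation logged above:
run_test nvmf_vfio_user_fuzz ./test/nvmf/target/vfio_user_fuzz.sh --transport=tcp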
00:14:06.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:06.541 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:06.541 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:14:06.541 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:06.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.800 --rc genhtml_branch_coverage=1 00:14:06.800 --rc genhtml_function_coverage=1 00:14:06.800 --rc genhtml_legend=1 00:14:06.800 --rc geninfo_all_blocks=1 00:14:06.800 --rc geninfo_unexecuted_blocks=1 00:14:06.800 00:14:06.800 ' 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:06.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.800 --rc genhtml_branch_coverage=1 00:14:06.800 --rc genhtml_function_coverage=1 00:14:06.800 --rc genhtml_legend=1 00:14:06.800 --rc geninfo_all_blocks=1 00:14:06.800 --rc geninfo_unexecuted_blocks=1 00:14:06.800 00:14:06.800 ' 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:06.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.800 --rc genhtml_branch_coverage=1 00:14:06.800 --rc genhtml_function_coverage=1 00:14:06.800 --rc genhtml_legend=1 00:14:06.800 --rc geninfo_all_blocks=1 00:14:06.800 --rc geninfo_unexecuted_blocks=1 00:14:06.800 00:14:06.800 ' 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:06.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.800 --rc genhtml_branch_coverage=1 00:14:06.800 --rc genhtml_function_coverage=1 00:14:06.800 --rc genhtml_legend=1 00:14:06.800 --rc geninfo_all_blocks=1 00:14:06.800 --rc geninfo_unexecuted_blocks=1 00:14:06.800 00:14:06.800 ' 00:14:06.800 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:06.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1230754 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1230754' 00:14:06.801 Process pid: 1230754 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1230754 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1230754 ']' 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
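[editor's note] The fuzz target above is launched with the same nvmf_tgt binary as the compliance target, only pinned to a single core. A sketch of that launch with the flag readings spelled out; the readings are inferred from the target's own startup notices earlier in the log (tracepoint group mask, reactor-per-core messages), not taken from nvmf_tgt --help.

# Flag readings inferred from the startup notices above:
#   -i 0       shared-memory id (the scripts build the app args as -i "$NVMF_APP_SHM_ID")
#   -e 0xFFFF  tracepoint group mask ("Tracepoint Group Mask 0xFFFF specified" in the compliance run)
#   -m 0x1     core mask: one reactor on core 0 (the compliance target used -m 0x7 -> reactors on cores 0,1,2)
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!                                            # recorded as "Process pid: ..." in the log
trap 'kill "$nvmfpid"; exit 1' SIGINT SIGTERM EXIT    # plain-kill version of the killprocess trap above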
00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:06.801 18:21:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:07.060 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:07.060 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:14:07.060 18:21:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:07.994 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:07.994 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.994 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:07.994 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.994 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:07.994 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:07.994 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.994 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:08.252 malloc0 00:14:08.252 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.252 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:08.252 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.252 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:08.252 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.252 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:08.252 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.252 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:08.252 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.252 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:08.252 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.252 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:08.252 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.252 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
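[editor's note] Condensed from the xtrace directly above, the vfio-user target for the fuzz run is assembled with the RPC sequence below. rpc_cmd in the test scripts forwards its arguments to scripts/rpc.py over the /var/tmp/spdk.sock socket the scripts wait on, so the direct invocation is a reasonable sketch of what actually ran, not a verbatim transcript.

# Equivalent of the rpc_cmd sequence in the xtrace above, issued via scripts/rpc.py.
rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER                      # vfio-user transport inside the running nvmf_tgt
mkdir -p /var/run/vfio-user                                 # directory that holds the vfio-user socket files
$rpc bdev_malloc_create 64 512 -b malloc0                   # 64 MiB RAM bdev, 512-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE)
$rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
$rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'   # connection string handed to nvme_fuzz next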
00:14:08.252 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:40.319 Fuzzing completed. Shutting down the fuzz application 00:14:40.319 00:14:40.319 Dumping successful admin opcodes: 00:14:40.319 8, 9, 10, 24, 00:14:40.319 Dumping successful io opcodes: 00:14:40.319 0, 00:14:40.319 NS: 0x200003a1ef00 I/O qp, Total commands completed: 657664, total successful commands: 2559, random_seed: 1767514688 00:14:40.319 NS: 0x200003a1ef00 admin qp, Total commands completed: 84216, total successful commands: 670, random_seed: 243279296 00:14:40.319 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:40.319 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.319 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:40.319 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.319 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1230754 00:14:40.319 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1230754 ']' 00:14:40.319 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 1230754 00:14:40.319 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:14:40.319 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:40.319 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1230754 00:14:40.319 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:40.319 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:40.319 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1230754' 00:14:40.319 killing process with pid 1230754 00:14:40.319 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 1230754 00:14:40.319 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 1230754 00:14:40.319 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:40.319 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:40.319 00:14:40.319 real 0m32.408s 00:14:40.319 user 0m32.186s 00:14:40.319 sys 0m28.016s 00:14:40.319 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:40.319 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:40.319 
************************************ 00:14:40.319 END TEST nvmf_vfio_user_fuzz 00:14:40.319 ************************************ 00:14:40.319 18:21:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:40.319 18:21:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:40.319 18:21:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:40.319 18:21:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:40.319 ************************************ 00:14:40.319 START TEST nvmf_auth_target 00:14:40.319 ************************************ 00:14:40.319 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:40.319 * Looking for test storage... 00:14:40.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:40.319 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:40.319 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:14:40.319 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:40.319 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:40.319 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:40.319 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:40.319 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:40.319 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:40.319 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:40.319 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:40.319 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:40.319 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:40.319 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:40.319 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:40.319 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:40.319 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:40.319 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:40.319 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:40.319 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:40.319 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:40.319 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:40.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.320 --rc genhtml_branch_coverage=1 00:14:40.320 --rc genhtml_function_coverage=1 00:14:40.320 --rc genhtml_legend=1 00:14:40.320 --rc geninfo_all_blocks=1 00:14:40.320 --rc geninfo_unexecuted_blocks=1 00:14:40.320 00:14:40.320 ' 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:40.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.320 --rc genhtml_branch_coverage=1 00:14:40.320 --rc genhtml_function_coverage=1 00:14:40.320 --rc genhtml_legend=1 00:14:40.320 --rc geninfo_all_blocks=1 00:14:40.320 --rc geninfo_unexecuted_blocks=1 00:14:40.320 00:14:40.320 ' 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:40.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.320 --rc genhtml_branch_coverage=1 00:14:40.320 --rc genhtml_function_coverage=1 00:14:40.320 --rc genhtml_legend=1 00:14:40.320 --rc geninfo_all_blocks=1 00:14:40.320 --rc geninfo_unexecuted_blocks=1 00:14:40.320 00:14:40.320 ' 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:40.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:40.320 --rc genhtml_branch_coverage=1 00:14:40.320 --rc genhtml_function_coverage=1 00:14:40.320 --rc genhtml_legend=1 00:14:40.320 --rc geninfo_all_blocks=1 00:14:40.320 --rc geninfo_unexecuted_blocks=1 00:14:40.320 00:14:40.320 ' 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:40.320 18:21:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:40.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:40.320 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.255 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:41.255 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:14:41.255 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:41.255 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:41.255 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:41.255 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:41.255 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:41.255 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:14:41.255 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:41.255 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:14:41.255 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:14:41.255 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:14:41.255 
18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:14:41.255 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:14:41.255 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:14:41.255 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:41.255 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:41.255 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:41.255 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:41.255 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:41.255 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:41.255 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:41.255 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:41.255 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:41.255 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:41.255 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:41.255 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:41.256 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:41.256 18:21:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:41.256 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:41.256 Found net devices under 0000:09:00.0: cvl_0_0 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:41.256 Found net devices under 0000:09:00.1: cvl_0_1 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:41.256 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:41.514 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:41.514 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:41.514 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:41.514 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:41.514 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:41.515 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:41.515 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:41.515 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:41.515 18:21:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:41.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:41.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:14:41.515 00:14:41.515 --- 10.0.0.2 ping statistics --- 00:14:41.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.515 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:14:41.515 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:41.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:41.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:14:41.515 00:14:41.515 --- 10.0.0.1 ping statistics --- 00:14:41.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.515 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:14:41.515 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:41.515 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:14:41.515 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:41.515 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:41.515 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:41.515 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:41.515 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:41.515 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:41.515 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:41.515 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:14:41.515 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:41.515 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:41.515 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.515 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=1236221 00:14:41.515 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:41.515 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 1236221 00:14:41.515 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1236221 ']' 00:14:41.515 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.515 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:41.515 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
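At this point in the trace the fixture is fully wired: the two E810 ports were discovered as cvl_0_0 and cvl_0_1, one of them was moved into a private network namespace to play the target role, the pair was addressed as 10.0.0.2 (target) / 10.0.0.1 (initiator), TCP port 4420 was opened in iptables, reachability was checked with ping in both directions, and nvmf_tgt was started inside the namespace with DH-CHAP debug logging (-L nvmf_auth). A minimal manual reproduction of that bring-up, using the interface names, addresses and SPDK paths printed above (all of which are specific to this machine), would look roughly like:

    # interface names below are the ones printed in this log; adjust for your own NICs
    TGT_IF=cvl_0_0        # port handed to the SPDK target
    INI_IF=cvl_0_1        # port left in the default namespace for the initiator
    NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1

    # start the NVMe-oF target inside the namespace with auth-level logging,
    # as the harness does (binary path relative to the SPDK build in this workspace)
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &

Running the target under its own network namespace is what lets a single dual-port NIC act as both initiator and target on the same host.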
00:14:41.515 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:41.515 18:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.773 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:41.773 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:14:41.773 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:41.773 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:41.773 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.773 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:41.773 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1236240 00:14:41.773 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:41.773 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:41.773 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:14:41.773 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:14:41.773 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:41.773 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:14:41.773 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:14:41.773 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:14:41.773 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:41.773 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=fa3820fdf99f1e712ac10ae355257daf826e3fba79081d8d 00:14:41.773 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:14:41.773 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.pW0 00:14:41.773 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key fa3820fdf99f1e712ac10ae355257daf826e3fba79081d8d 0 00:14:41.773 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 fa3820fdf99f1e712ac10ae355257daf826e3fba79081d8d 0 00:14:41.773 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:14:41.773 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:14:41.773 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=fa3820fdf99f1e712ac10ae355257daf826e3fba79081d8d 00:14:41.773 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:14:41.773 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 
00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.pW0 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.pW0 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.pW0 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=e79383aec47f390f5626d4e8ae6617bb7891e5563bb19ed35feadd8ea8bb9476 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.b3c 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key e79383aec47f390f5626d4e8ae6617bb7891e5563bb19ed35feadd8ea8bb9476 3 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 e79383aec47f390f5626d4e8ae6617bb7891e5563bb19ed35feadd8ea8bb9476 3 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=e79383aec47f390f5626d4e8ae6617bb7891e5563bb19ed35feadd8ea8bb9476 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.b3c 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.b3c 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.b3c 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 
00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=5472c447621e24144e3be6ebd1fd90d8 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.4Kf 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 5472c447621e24144e3be6ebd1fd90d8 1 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 5472c447621e24144e3be6ebd1fd90d8 1 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=5472c447621e24144e3be6ebd1fd90d8 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.4Kf 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.4Kf 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.4Kf 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=4d81efa97312ecdef4a432e5992835874abd986d653fb27d 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.Qq7 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 4d81efa97312ecdef4a432e5992835874abd986d653fb27d 2 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 4d81efa97312ecdef4a432e5992835874abd986d653fb27d 2 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:14:42.033 18:22:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=4d81efa97312ecdef4a432e5992835874abd986d653fb27d 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.Qq7 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.Qq7 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Qq7 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=f39a646a2b8f3618346a65f38335517042f9203824252721 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.tST 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key f39a646a2b8f3618346a65f38335517042f9203824252721 2 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 f39a646a2b8f3618346a65f38335517042f9203824252721 2 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=f39a646a2b8f3618346a65f38335517042f9203824252721 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.tST 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.tST 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.tST 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 
00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=dbb2cab3574b8bcff3f26d349aa41f2d 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.g6G 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key dbb2cab3574b8bcff3f26d349aa41f2d 1 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 dbb2cab3574b8bcff3f26d349aa41f2d 1 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=dbb2cab3574b8bcff3f26d349aa41f2d 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:14:42.033 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:14:42.292 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.g6G 00:14:42.292 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.g6G 00:14:42.292 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.g6G 00:14:42.292 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:42.292 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:14:42.292 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:42.292 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:14:42.292 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:14:42.292 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:14:42.292 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:42.292 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=10a825479c4a75fbe91bbb78afb51392c6139a8045f18c860461ed46c815c488 00:14:42.292 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:14:42.292 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.47e 00:14:42.292 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # 
format_dhchap_key 10a825479c4a75fbe91bbb78afb51392c6139a8045f18c860461ed46c815c488 3 00:14:42.292 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 10a825479c4a75fbe91bbb78afb51392c6139a8045f18c860461ed46c815c488 3 00:14:42.292 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:14:42.292 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:14:42.292 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=10a825479c4a75fbe91bbb78afb51392c6139a8045f18c860461ed46c815c488 00:14:42.292 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:14:42.292 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:14:42.292 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.47e 00:14:42.292 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.47e 00:14:42.292 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.47e 00:14:42.292 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:42.292 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1236221 00:14:42.292 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1236221 ']' 00:14:42.292 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.292 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:42.292 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.293 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:42.293 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.551 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:42.551 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:14:42.551 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1236240 /var/tmp/host.sock 00:14:42.551 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1236240 ']' 00:14:42.551 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:14:42.551 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:42.551 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:42.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
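The keys[] and ckeys[] files produced above are DH-HMAC-CHAP secrets in the DHHC-1 text form: random bytes read with xxd from /dev/urandom, wrapped as DHHC-1:<hash-id>:<base64 blob>: (hash id 00 = null, 01 = sha256, 02 = sha384, 03 = sha512, matching the second argument to format_dhchap_key) and written to a chmod-0600 temp file. The sketch below is a reconstruction of that formatting from the trace, not the helper itself; in particular, the assumption that the base64 blob is the ASCII secret followed by its little-endian CRC-32 comes from the NVMe in-band-authentication secret representation and should be treated as an assumption.

    # rough stand-alone equivalent of "gen_dhchap_key null 48" as seen in the trace
    key=$(xxd -p -c0 -l 24 /dev/urandom)     # 24 random bytes -> 48 hex characters
    secret=$(python3 -c '
    import base64, struct, sys, zlib
    key = sys.argv[1].encode()
    blob = base64.b64encode(key + struct.pack("<I", zlib.crc32(key))).decode()
    print("DHHC-1:00:" + blob + ":")          # 00 = null digest; assumed CRC-32 tail
    ' "$key")
    keyfile=$(mktemp -t spdk.key-null.XXX)
    echo "$secret" > "$keyfile"
    chmod 0600 "$keyfile"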
00:14:42.551 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:42.551 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.809 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:42.809 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:14:42.809 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:14:42.809 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.809 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.809 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.809 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:42.809 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.pW0 00:14:42.809 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.809 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.809 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.809 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.pW0 00:14:42.809 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.pW0 00:14:43.067 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.b3c ]] 00:14:43.067 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.b3c 00:14:43.067 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.067 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.067 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.067 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.b3c 00:14:43.067 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.b3c 00:14:43.325 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:43.325 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.4Kf 00:14:43.325 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.325 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.325 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.325 18:22:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.4Kf 00:14:43.325 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.4Kf 00:14:43.583 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Qq7 ]] 00:14:43.583 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Qq7 00:14:43.583 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.583 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.583 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.583 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Qq7 00:14:43.583 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Qq7 00:14:43.841 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:43.841 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.tST 00:14:43.841 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.841 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.841 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.841 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.tST 00:14:43.841 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.tST 00:14:44.099 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.g6G ]] 00:14:44.099 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.g6G 00:14:44.099 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.099 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.099 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.099 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.g6G 00:14:44.099 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.g6G 00:14:44.357 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:44.357 18:22:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.47e 00:14:44.357 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.357 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.357 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.357 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.47e 00:14:44.357 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.47e 00:14:44.615 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:14:44.615 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:44.615 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:44.615 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:44.615 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:44.615 18:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:45.181 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:14:45.181 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:45.181 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:45.181 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:45.181 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:45.181 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.181 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.181 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.181 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.181 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.181 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.181 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.181 
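With the secrets on disk, the rest of the setup is RPC plumbing, repeated for every key index: both applications load the key files into their keyrings, the host-side NVMe driver is pinned to the digest/dhgroup combination under test, the target allows the host NQN on the subsystem with a key pair, and a controller is attached using that same pair. The commands below are the ones visible in the trace for key0, collapsed into one place; the rpc.py path is written relative to an SPDK checkout, and the /tmp key names carry random mktemp suffixes that differ per run.

    RPC=./scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

    # host side (spdk_tgt on /var/tmp/host.sock): register keys, pin digest/dhgroup
    $RPC -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.pW0
    $RPC -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.b3c
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

    # target side (default /var/tmp/spdk.sock): same key files, then allow the host
    $RPC keyring_file_add_key key0  /tmp/spdk.key-null.pW0
    $RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.b3c
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # host side: attach a controller, authenticating with the same pair
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0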
18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.439 00:14:45.439 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:45.439 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.439 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:45.697 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.697 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.697 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.697 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.697 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.697 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:45.697 { 00:14:45.697 "cntlid": 1, 00:14:45.697 "qid": 0, 00:14:45.697 "state": "enabled", 00:14:45.697 "thread": "nvmf_tgt_poll_group_000", 00:14:45.697 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:45.697 "listen_address": { 00:14:45.697 "trtype": "TCP", 00:14:45.697 "adrfam": "IPv4", 00:14:45.697 "traddr": "10.0.0.2", 00:14:45.697 "trsvcid": "4420" 00:14:45.697 }, 00:14:45.697 "peer_address": { 00:14:45.697 "trtype": "TCP", 00:14:45.697 "adrfam": "IPv4", 00:14:45.697 "traddr": "10.0.0.1", 00:14:45.697 "trsvcid": "46150" 00:14:45.697 }, 00:14:45.697 "auth": { 00:14:45.698 "state": "completed", 00:14:45.698 "digest": "sha256", 00:14:45.698 "dhgroup": "null" 00:14:45.698 } 00:14:45.698 } 00:14:45.698 ]' 00:14:45.698 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:45.698 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:45.698 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:45.698 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:45.698 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:45.698 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.698 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.698 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.959 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:14:45.959 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:14:46.892 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.892 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:46.892 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.892 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.892 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.892 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:46.892 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:46.892 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:47.150 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:14:47.150 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:47.150 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:47.150 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:47.150 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:47.150 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.150 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.150 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.150 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.150 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.150 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.150 18:22:05 
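Besides the SPDK-to-SPDK attach, each key is also exercised from the kernel initiator: nvme-cli is handed the literal DHHC-1 strings rather than keyring names, and the loop then disconnects and removes the host entry so the next digest/dhgroup/key combination (key1 above) starts from a clean state. Reassembled from the trace, with the secret values taken from the key files written earlier:

    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
    HOSTKEY=$(cat /tmp/spdk.key-null.pW0)      # host (DH-CHAP) secret
    CTRLKEY=$(cat /tmp/spdk.key-sha512.b3c)    # controller secret for bidirectional auth

    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 \
        -q "$HOSTNQN" --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 \
        --dhchap-secret "$HOSTKEY" --dhchap-ctrl-secret "$CTRLKEY"

    nvme disconnect -n "$SUBNQN"

    # drop the host entry so the next key/dhgroup round starts clean
    ./scripts/rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"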
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.150 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.408 00:14:47.408 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:47.408 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:47.408 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.666 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.666 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.666 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.666 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.666 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.666 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:47.666 { 00:14:47.666 "cntlid": 3, 00:14:47.666 "qid": 0, 00:14:47.666 "state": "enabled", 00:14:47.666 "thread": "nvmf_tgt_poll_group_000", 00:14:47.666 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:47.666 "listen_address": { 00:14:47.666 "trtype": "TCP", 00:14:47.666 "adrfam": "IPv4", 00:14:47.666 "traddr": "10.0.0.2", 00:14:47.666 "trsvcid": "4420" 00:14:47.666 }, 00:14:47.666 "peer_address": { 00:14:47.666 "trtype": "TCP", 00:14:47.666 "adrfam": "IPv4", 00:14:47.666 "traddr": "10.0.0.1", 00:14:47.666 "trsvcid": "46180" 00:14:47.666 }, 00:14:47.666 "auth": { 00:14:47.666 "state": "completed", 00:14:47.666 "digest": "sha256", 00:14:47.666 "dhgroup": "null" 00:14:47.666 } 00:14:47.666 } 00:14:47.666 ]' 00:14:47.666 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:47.923 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:47.923 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:47.923 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:47.923 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:47.924 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.924 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.924 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.182 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:14:48.182 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:14:49.115 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.115 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:49.115 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.115 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.115 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.115 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:49.115 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:49.115 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:49.372 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:14:49.372 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:49.372 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:49.372 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:49.372 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:49.372 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.372 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.372 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.372 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.372 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.372 18:22:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.372 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.372 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.630 00:14:49.630 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:49.630 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:49.630 18:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.898 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.898 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.898 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.898 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.898 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.898 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:49.898 { 00:14:49.898 "cntlid": 5, 00:14:49.898 "qid": 0, 00:14:49.898 "state": "enabled", 00:14:49.898 "thread": "nvmf_tgt_poll_group_000", 00:14:49.898 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:49.898 "listen_address": { 00:14:49.898 "trtype": "TCP", 00:14:49.898 "adrfam": "IPv4", 00:14:49.898 "traddr": "10.0.0.2", 00:14:49.898 "trsvcid": "4420" 00:14:49.898 }, 00:14:49.898 "peer_address": { 00:14:49.898 "trtype": "TCP", 00:14:49.898 "adrfam": "IPv4", 00:14:49.898 "traddr": "10.0.0.1", 00:14:49.898 "trsvcid": "46204" 00:14:49.898 }, 00:14:49.898 "auth": { 00:14:49.898 "state": "completed", 00:14:49.898 "digest": "sha256", 00:14:49.898 "dhgroup": "null" 00:14:49.898 } 00:14:49.898 } 00:14:49.898 ]' 00:14:49.898 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:49.898 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:49.898 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:49.898 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:49.898 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:50.155 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.155 18:22:08 
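The "connect_authenticate" block repeated throughout this log reduces to the host-side RPC round trip sketched here. This is a minimal sketch, not part of the captured output: it assumes the target from this run (nqn.2024-03.io.spdk:cnode0 listening on 10.0.0.2:4420), the bdev_nvme host RPC socket at /var/tmp/host.sock, and DH-HMAC-CHAP keys key2/ckey2 registered earlier in auth.sh.

    # Host-side round trip per key (sketch; values taken from the log above)
    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

    $RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    $RPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    $RPC bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
    $RPC bdev_nvme_detach_controller nvme0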
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.155 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.413 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:14:50.413 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:14:51.347 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.347 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:51.347 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.347 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.347 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.347 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:51.347 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:51.347 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:51.605 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:14:51.605 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:51.605 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:51.605 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:51.605 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:51.605 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.605 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:51.605 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.605 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:51.605 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.605 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:51.605 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:51.605 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:51.863 00:14:51.863 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:51.863 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:51.863 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.121 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.121 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.121 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.121 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.121 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.121 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:52.121 { 00:14:52.121 "cntlid": 7, 00:14:52.121 "qid": 0, 00:14:52.121 "state": "enabled", 00:14:52.121 "thread": "nvmf_tgt_poll_group_000", 00:14:52.121 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:52.121 "listen_address": { 00:14:52.121 "trtype": "TCP", 00:14:52.121 "adrfam": "IPv4", 00:14:52.121 "traddr": "10.0.0.2", 00:14:52.121 "trsvcid": "4420" 00:14:52.121 }, 00:14:52.121 "peer_address": { 00:14:52.121 "trtype": "TCP", 00:14:52.121 "adrfam": "IPv4", 00:14:52.121 "traddr": "10.0.0.1", 00:14:52.121 "trsvcid": "46252" 00:14:52.122 }, 00:14:52.122 "auth": { 00:14:52.122 "state": "completed", 00:14:52.122 "digest": "sha256", 00:14:52.122 "dhgroup": "null" 00:14:52.122 } 00:14:52.122 } 00:14:52.122 ]' 00:14:52.122 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:52.122 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:52.122 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:52.122 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:52.122 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:52.122 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.122 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.122 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.379 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:14:52.380 18:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:14:53.313 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.313 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:53.313 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.313 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.313 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.313 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:53.313 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:53.313 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:53.313 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:53.571 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:14:53.571 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:53.571 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:53.571 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:53.571 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:53.571 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.571 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.571 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.571 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.571 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.571 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.571 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.571 18:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.137 00:14:54.137 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:54.137 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:54.137 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.396 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.396 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.396 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.396 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.396 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.396 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:54.396 { 00:14:54.396 "cntlid": 9, 00:14:54.396 "qid": 0, 00:14:54.396 "state": "enabled", 00:14:54.396 "thread": "nvmf_tgt_poll_group_000", 00:14:54.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:54.396 "listen_address": { 00:14:54.396 "trtype": "TCP", 00:14:54.396 "adrfam": "IPv4", 00:14:54.396 "traddr": "10.0.0.2", 00:14:54.396 "trsvcid": "4420" 00:14:54.396 }, 00:14:54.396 "peer_address": { 00:14:54.396 "trtype": "TCP", 00:14:54.396 "adrfam": "IPv4", 00:14:54.396 "traddr": "10.0.0.1", 00:14:54.396 "trsvcid": "46284" 00:14:54.396 }, 00:14:54.396 "auth": { 00:14:54.396 "state": "completed", 00:14:54.396 "digest": "sha256", 00:14:54.396 "dhgroup": "ffdhe2048" 00:14:54.396 } 00:14:54.396 } 00:14:54.396 ]' 00:14:54.396 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:54.396 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:54.396 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:54.396 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:14:54.396 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:54.396 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.396 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.396 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.688 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:14:54.688 18:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:14:55.667 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.667 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:55.667 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.667 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.667 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.667 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:55.667 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:55.667 18:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:55.925 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:14:55.925 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:55.925 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:55.925 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:55.925 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:55.925 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.925 18:22:14 
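Besides the SPDK bdev path, each iteration also exercises the kernel initiator: auth.sh calls nvme-cli with the DH-HMAC-CHAP secrets passed on the command line (the DHHC-1:xx:...: strings printed above; the two-digit field after DHHC-1 encodes whether and how the secret was hashed, 00 meaning an unhashed secret). A sketch of that leg, with the secret values left as placeholders for the exact strings shown in the log:

    # Kernel-initiator leg (sketch; secrets are the DHHC-1 strings printed in the log above)
    nvme connect -t tcp -a 10.0.0.2 -l 0 -i 1 \
        -n nqn.2024-03.io.spdk:cnode0 \
        -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
        --hostid 29f67375-a902-e411-ace9-001e67bc3c9a \
        --dhchap-secret '<DHHC-1 host secret>' \
        --dhchap-ctrl-secret '<DHHC-1 controller secret>'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0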
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.925 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.925 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.925 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.925 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.925 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.925 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.184 00:14:56.184 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:56.184 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:56.184 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.442 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.442 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.442 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.442 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.442 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.442 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:56.442 { 00:14:56.442 "cntlid": 11, 00:14:56.442 "qid": 0, 00:14:56.442 "state": "enabled", 00:14:56.442 "thread": "nvmf_tgt_poll_group_000", 00:14:56.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:56.442 "listen_address": { 00:14:56.442 "trtype": "TCP", 00:14:56.442 "adrfam": "IPv4", 00:14:56.442 "traddr": "10.0.0.2", 00:14:56.442 "trsvcid": "4420" 00:14:56.442 }, 00:14:56.442 "peer_address": { 00:14:56.442 "trtype": "TCP", 00:14:56.442 "adrfam": "IPv4", 00:14:56.442 "traddr": "10.0.0.1", 00:14:56.442 "trsvcid": "43746" 00:14:56.442 }, 00:14:56.442 "auth": { 00:14:56.442 "state": "completed", 00:14:56.442 "digest": "sha256", 00:14:56.442 "dhgroup": "ffdhe2048" 00:14:56.442 } 00:14:56.442 } 00:14:56.442 ]' 00:14:56.442 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:56.700 18:22:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:56.700 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:56.700 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:56.700 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:56.700 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.700 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.700 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.958 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:14:56.958 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:14:57.891 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.891 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:57.891 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.891 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.891 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.891 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:57.891 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:57.891 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:58.149 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:14:58.149 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:58.149 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:58.149 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:58.149 18:22:16 
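On the target side, every successful attach is verified by dumping the subsystem's queue pairs and checking the negotiated authentication parameters, which is what the jq probes above do. A compact sketch of that check (rpc_cmd is the test framework's wrapper around scripts/rpc.py pointed at the nvmf target's RPC socket; qpairs.json is only an illustrative temporary file):

    # Target-side verification per attach (sketch)
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 > qpairs.json
    jq -r '.[0].auth.digest'  qpairs.json    # expect: sha256
    jq -r '.[0].auth.dhgroup' qpairs.json    # expect: the dhgroup under test (ffdhe2048 here)
    jq -r '.[0].auth.state'   qpairs.json    # expect: completed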
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:58.149 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.149 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:58.149 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.149 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.149 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.149 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:58.149 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:58.149 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:58.407 00:14:58.407 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:58.407 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.407 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:58.665 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.665 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.665 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.665 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.665 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.665 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:58.665 { 00:14:58.665 "cntlid": 13, 00:14:58.665 "qid": 0, 00:14:58.665 "state": "enabled", 00:14:58.665 "thread": "nvmf_tgt_poll_group_000", 00:14:58.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:58.665 "listen_address": { 00:14:58.665 "trtype": "TCP", 00:14:58.665 "adrfam": "IPv4", 00:14:58.665 "traddr": "10.0.0.2", 00:14:58.665 "trsvcid": "4420" 00:14:58.665 }, 00:14:58.665 "peer_address": { 00:14:58.665 "trtype": "TCP", 00:14:58.665 "adrfam": "IPv4", 00:14:58.665 "traddr": "10.0.0.1", 00:14:58.665 "trsvcid": "43774" 00:14:58.665 }, 00:14:58.665 "auth": { 00:14:58.665 "state": "completed", 00:14:58.665 "digest": 
"sha256", 00:14:58.665 "dhgroup": "ffdhe2048" 00:14:58.665 } 00:14:58.665 } 00:14:58.665 ]' 00:14:58.665 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:58.923 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:58.923 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:58.923 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:58.923 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:58.923 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.923 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.923 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.181 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:14:59.181 18:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:15:00.113 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.114 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:00.114 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.114 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.114 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.114 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:00.114 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:00.114 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:00.370 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:00.370 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:00.370 18:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:00.370 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:00.370 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:00.370 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.370 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:00.370 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.370 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.370 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.370 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:00.370 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:00.370 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:00.936 00:15:00.936 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:00.936 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:00.936 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.193 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.193 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.193 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.193 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.193 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.193 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:01.193 { 00:15:01.193 "cntlid": 15, 00:15:01.193 "qid": 0, 00:15:01.193 "state": "enabled", 00:15:01.193 "thread": "nvmf_tgt_poll_group_000", 00:15:01.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:01.193 "listen_address": { 00:15:01.193 "trtype": "TCP", 00:15:01.193 "adrfam": "IPv4", 00:15:01.193 "traddr": "10.0.0.2", 00:15:01.193 "trsvcid": "4420" 00:15:01.193 }, 00:15:01.193 "peer_address": { 00:15:01.193 "trtype": "TCP", 00:15:01.193 "adrfam": "IPv4", 00:15:01.193 "traddr": "10.0.0.1", 00:15:01.193 
"trsvcid": "43816" 00:15:01.194 }, 00:15:01.194 "auth": { 00:15:01.194 "state": "completed", 00:15:01.194 "digest": "sha256", 00:15:01.194 "dhgroup": "ffdhe2048" 00:15:01.194 } 00:15:01.194 } 00:15:01.194 ]' 00:15:01.194 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:01.194 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:01.194 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:01.194 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:01.194 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:01.194 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.194 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.194 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.452 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:15:01.452 18:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:15:02.385 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.385 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:02.385 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.385 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.385 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.385 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:02.385 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:02.385 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:02.386 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:02.643 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:02.643 18:22:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:02.643 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:02.643 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:02.643 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:02.643 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.643 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:02.643 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.643 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.643 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.643 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:02.643 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:02.643 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.209 00:15:03.209 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:03.209 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.209 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:03.467 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.467 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.467 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.467 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.467 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.467 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:03.467 { 00:15:03.467 "cntlid": 17, 00:15:03.467 "qid": 0, 00:15:03.467 "state": "enabled", 00:15:03.467 "thread": "nvmf_tgt_poll_group_000", 00:15:03.467 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:03.467 "listen_address": { 00:15:03.467 "trtype": "TCP", 00:15:03.467 "adrfam": "IPv4", 
00:15:03.467 "traddr": "10.0.0.2", 00:15:03.467 "trsvcid": "4420" 00:15:03.467 }, 00:15:03.467 "peer_address": { 00:15:03.467 "trtype": "TCP", 00:15:03.467 "adrfam": "IPv4", 00:15:03.467 "traddr": "10.0.0.1", 00:15:03.467 "trsvcid": "43836" 00:15:03.467 }, 00:15:03.467 "auth": { 00:15:03.467 "state": "completed", 00:15:03.467 "digest": "sha256", 00:15:03.467 "dhgroup": "ffdhe3072" 00:15:03.467 } 00:15:03.467 } 00:15:03.467 ]' 00:15:03.467 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:03.467 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:03.467 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:03.467 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:03.467 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:03.467 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.467 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.467 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.725 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:15:03.725 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:15:04.658 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.658 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:04.658 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.658 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.658 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.658 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:04.658 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:04.658 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:04.916 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:04.916 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:04.916 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:04.916 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:04.916 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:04.916 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.916 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.916 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.916 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.916 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.916 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.916 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.916 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.173 00:15:05.173 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:05.173 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:05.173 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.739 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.739 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.739 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.739 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.739 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.739 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:05.739 { 
00:15:05.739 "cntlid": 19, 00:15:05.739 "qid": 0, 00:15:05.739 "state": "enabled", 00:15:05.739 "thread": "nvmf_tgt_poll_group_000", 00:15:05.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:05.739 "listen_address": { 00:15:05.739 "trtype": "TCP", 00:15:05.739 "adrfam": "IPv4", 00:15:05.739 "traddr": "10.0.0.2", 00:15:05.739 "trsvcid": "4420" 00:15:05.739 }, 00:15:05.739 "peer_address": { 00:15:05.739 "trtype": "TCP", 00:15:05.739 "adrfam": "IPv4", 00:15:05.739 "traddr": "10.0.0.1", 00:15:05.739 "trsvcid": "60928" 00:15:05.739 }, 00:15:05.739 "auth": { 00:15:05.739 "state": "completed", 00:15:05.739 "digest": "sha256", 00:15:05.739 "dhgroup": "ffdhe3072" 00:15:05.739 } 00:15:05.739 } 00:15:05.739 ]' 00:15:05.739 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:05.739 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:05.739 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:05.739 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:05.739 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:05.739 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.739 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.739 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.997 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:15:05.997 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:15:06.930 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.930 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:06.930 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.930 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.930 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.930 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:06.930 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
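The structure driving all of this output is a pair of nested loops in auth.sh: the outer loop walks the DH groups offered via bdev_nvme_set_options (null first, then ffdhe2048, then ffdhe3072 in this stretch), and the inner loop walks key ids 0 through 3. For each combination the target registers the host NQN with the matching key pair, the host-side attach and nvme-cli connect shown in the sketches above are run, and the host is removed again. A sketch of the target-side bracket, assuming keyN/ckeyN were registered with the target earlier in the script (key3 has no controller key, so its --dhchap-ctrlr-key argument is omitted, as in the log):

    # Target-side bracket around each (dhgroup, keyid) iteration (sketch)
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # ... host-side attach / nvme connect checks from the sketches above ...
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"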
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:06.930 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:07.188 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:07.188 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:07.188 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:07.188 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:07.188 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:07.188 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.188 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.188 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.188 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.188 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.188 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.188 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.188 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.446 00:15:07.446 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:07.446 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.446 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:07.703 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.703 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.704 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.704 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.704 18:22:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.704 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:07.704 { 00:15:07.704 "cntlid": 21, 00:15:07.704 "qid": 0, 00:15:07.704 "state": "enabled", 00:15:07.704 "thread": "nvmf_tgt_poll_group_000", 00:15:07.704 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:07.704 "listen_address": { 00:15:07.704 "trtype": "TCP", 00:15:07.704 "adrfam": "IPv4", 00:15:07.704 "traddr": "10.0.0.2", 00:15:07.704 "trsvcid": "4420" 00:15:07.704 }, 00:15:07.704 "peer_address": { 00:15:07.704 "trtype": "TCP", 00:15:07.704 "adrfam": "IPv4", 00:15:07.704 "traddr": "10.0.0.1", 00:15:07.704 "trsvcid": "60956" 00:15:07.704 }, 00:15:07.704 "auth": { 00:15:07.704 "state": "completed", 00:15:07.704 "digest": "sha256", 00:15:07.704 "dhgroup": "ffdhe3072" 00:15:07.704 } 00:15:07.704 } 00:15:07.704 ]' 00:15:07.704 18:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:07.962 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:07.962 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:07.962 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:07.962 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:07.962 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.962 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.962 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.220 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:15:08.220 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:15:09.153 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.153 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:09.153 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.153 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.153 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:09.153 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:09.153 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:09.153 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:09.410 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:09.411 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:09.411 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:09.411 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:09.411 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:09.411 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.411 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:09.411 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.411 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.411 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.411 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:09.411 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:09.411 18:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:09.974 00:15:09.974 18:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.974 18:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:09.974 18:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.232 18:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.232 18:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.232 18:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.232 18:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.232 18:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.232 18:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:10.232 { 00:15:10.232 "cntlid": 23, 00:15:10.232 "qid": 0, 00:15:10.232 "state": "enabled", 00:15:10.232 "thread": "nvmf_tgt_poll_group_000", 00:15:10.232 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:10.232 "listen_address": { 00:15:10.232 "trtype": "TCP", 00:15:10.232 "adrfam": "IPv4", 00:15:10.232 "traddr": "10.0.0.2", 00:15:10.232 "trsvcid": "4420" 00:15:10.232 }, 00:15:10.232 "peer_address": { 00:15:10.232 "trtype": "TCP", 00:15:10.232 "adrfam": "IPv4", 00:15:10.232 "traddr": "10.0.0.1", 00:15:10.232 "trsvcid": "60992" 00:15:10.232 }, 00:15:10.232 "auth": { 00:15:10.232 "state": "completed", 00:15:10.232 "digest": "sha256", 00:15:10.232 "dhgroup": "ffdhe3072" 00:15:10.232 } 00:15:10.232 } 00:15:10.232 ]' 00:15:10.232 18:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:10.232 18:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:10.232 18:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:10.232 18:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:10.232 18:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:10.232 18:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.232 18:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.232 18:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.490 18:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:15:10.490 18:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:15:11.423 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.423 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:11.423 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.423 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.423 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:15:11.423 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:11.423 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:11.423 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:11.423 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:11.681 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:11.681 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:11.681 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:11.681 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:11.681 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:11.681 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.681 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.681 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.681 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.681 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.681 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.681 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.681 18:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.247 00:15:12.247 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:12.247 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:12.247 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.505 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.505 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.505 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.505 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.505 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.505 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:12.505 { 00:15:12.505 "cntlid": 25, 00:15:12.505 "qid": 0, 00:15:12.505 "state": "enabled", 00:15:12.505 "thread": "nvmf_tgt_poll_group_000", 00:15:12.505 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:12.505 "listen_address": { 00:15:12.505 "trtype": "TCP", 00:15:12.505 "adrfam": "IPv4", 00:15:12.505 "traddr": "10.0.0.2", 00:15:12.505 "trsvcid": "4420" 00:15:12.505 }, 00:15:12.505 "peer_address": { 00:15:12.505 "trtype": "TCP", 00:15:12.505 "adrfam": "IPv4", 00:15:12.505 "traddr": "10.0.0.1", 00:15:12.505 "trsvcid": "32786" 00:15:12.505 }, 00:15:12.505 "auth": { 00:15:12.505 "state": "completed", 00:15:12.505 "digest": "sha256", 00:15:12.505 "dhgroup": "ffdhe4096" 00:15:12.505 } 00:15:12.505 } 00:15:12.505 ]' 00:15:12.505 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:12.505 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:12.505 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:12.505 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:12.505 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:12.505 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.505 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.505 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.762 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:15:12.762 18:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:15:13.713 18:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.713 18:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:13.713 18:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.713 18:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.713 18:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.713 18:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.713 18:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:13.713 18:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:13.971 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:13.971 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.971 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:13.971 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:13.971 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:13.971 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.971 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.971 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.971 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.971 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.971 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.971 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.971 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:14.536 00:15:14.536 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:14.536 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:14.536 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.794 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.794 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.794 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.794 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.794 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.794 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.794 { 00:15:14.794 "cntlid": 27, 00:15:14.794 "qid": 0, 00:15:14.794 "state": "enabled", 00:15:14.794 "thread": "nvmf_tgt_poll_group_000", 00:15:14.794 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:14.794 "listen_address": { 00:15:14.794 "trtype": "TCP", 00:15:14.794 "adrfam": "IPv4", 00:15:14.794 "traddr": "10.0.0.2", 00:15:14.794 "trsvcid": "4420" 00:15:14.794 }, 00:15:14.794 "peer_address": { 00:15:14.794 "trtype": "TCP", 00:15:14.794 "adrfam": "IPv4", 00:15:14.794 "traddr": "10.0.0.1", 00:15:14.794 "trsvcid": "32810" 00:15:14.794 }, 00:15:14.794 "auth": { 00:15:14.794 "state": "completed", 00:15:14.794 "digest": "sha256", 00:15:14.794 "dhgroup": "ffdhe4096" 00:15:14.794 } 00:15:14.794 } 00:15:14.794 ]' 00:15:14.794 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.794 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:14.794 18:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.794 18:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:14.794 18:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.794 18:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.794 18:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.794 18:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.360 18:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:15:15.360 18:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:15:16.294 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:15:16.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.294 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:16.294 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.294 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.294 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.294 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:16.294 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:16.294 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:16.552 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:16.552 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:16.552 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:16.552 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:16.552 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:16.552 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.552 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.552 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.552 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.552 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.552 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.552 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.552 18:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.810 00:15:16.810 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:15:16.810 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:16.810 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.067 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.067 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.067 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.067 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.067 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.067 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.067 { 00:15:17.067 "cntlid": 29, 00:15:17.067 "qid": 0, 00:15:17.067 "state": "enabled", 00:15:17.067 "thread": "nvmf_tgt_poll_group_000", 00:15:17.067 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:17.067 "listen_address": { 00:15:17.067 "trtype": "TCP", 00:15:17.067 "adrfam": "IPv4", 00:15:17.067 "traddr": "10.0.0.2", 00:15:17.067 "trsvcid": "4420" 00:15:17.067 }, 00:15:17.067 "peer_address": { 00:15:17.067 "trtype": "TCP", 00:15:17.067 "adrfam": "IPv4", 00:15:17.067 "traddr": "10.0.0.1", 00:15:17.067 "trsvcid": "38274" 00:15:17.067 }, 00:15:17.067 "auth": { 00:15:17.067 "state": "completed", 00:15:17.067 "digest": "sha256", 00:15:17.067 "dhgroup": "ffdhe4096" 00:15:17.067 } 00:15:17.067 } 00:15:17.067 ]' 00:15:17.067 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.326 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:17.326 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:17.326 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:17.326 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.326 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.326 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.326 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.584 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:15:17.584 18:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: 
--dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:15:18.519 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.519 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:18.519 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.519 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.519 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.519 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:18.519 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:18.519 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:18.777 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:18.777 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:18.777 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:18.777 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:18.777 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:18.777 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.777 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:18.777 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.777 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.777 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.777 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:18.777 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:18.777 18:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:19.343 00:15:19.343 18:22:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:19.343 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:19.343 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.343 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.343 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.343 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.343 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.343 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.343 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:19.343 { 00:15:19.343 "cntlid": 31, 00:15:19.343 "qid": 0, 00:15:19.343 "state": "enabled", 00:15:19.343 "thread": "nvmf_tgt_poll_group_000", 00:15:19.343 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:19.343 "listen_address": { 00:15:19.343 "trtype": "TCP", 00:15:19.343 "adrfam": "IPv4", 00:15:19.343 "traddr": "10.0.0.2", 00:15:19.343 "trsvcid": "4420" 00:15:19.343 }, 00:15:19.343 "peer_address": { 00:15:19.343 "trtype": "TCP", 00:15:19.343 "adrfam": "IPv4", 00:15:19.343 "traddr": "10.0.0.1", 00:15:19.343 "trsvcid": "38300" 00:15:19.343 }, 00:15:19.344 "auth": { 00:15:19.344 "state": "completed", 00:15:19.344 "digest": "sha256", 00:15:19.344 "dhgroup": "ffdhe4096" 00:15:19.344 } 00:15:19.344 } 00:15:19.344 ]' 00:15:19.344 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:19.635 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:19.635 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:19.635 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:19.635 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:19.635 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.635 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.635 18:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.918 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:15:19.918 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret 
DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:15:20.851 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.851 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:20.851 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.851 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.851 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.851 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:20.851 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:20.851 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:20.851 18:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:21.109 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:21.109 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.109 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:21.109 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:21.109 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:21.109 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.109 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.109 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.109 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.109 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.109 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.109 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.109 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.675 00:15:21.675 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:21.675 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:21.675 18:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.933 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.933 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.933 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.933 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.933 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.933 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:21.933 { 00:15:21.933 "cntlid": 33, 00:15:21.933 "qid": 0, 00:15:21.933 "state": "enabled", 00:15:21.933 "thread": "nvmf_tgt_poll_group_000", 00:15:21.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:21.933 "listen_address": { 00:15:21.933 "trtype": "TCP", 00:15:21.933 "adrfam": "IPv4", 00:15:21.933 "traddr": "10.0.0.2", 00:15:21.933 "trsvcid": "4420" 00:15:21.933 }, 00:15:21.933 "peer_address": { 00:15:21.933 "trtype": "TCP", 00:15:21.933 "adrfam": "IPv4", 00:15:21.933 "traddr": "10.0.0.1", 00:15:21.933 "trsvcid": "38328" 00:15:21.933 }, 00:15:21.933 "auth": { 00:15:21.933 "state": "completed", 00:15:21.933 "digest": "sha256", 00:15:21.933 "dhgroup": "ffdhe6144" 00:15:21.933 } 00:15:21.933 } 00:15:21.933 ]' 00:15:21.933 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:21.933 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:21.933 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:21.933 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:21.933 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:21.933 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.933 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.933 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.191 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret 
DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:15:22.191 18:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:15:23.125 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.125 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:23.125 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.125 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.125 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.125 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.125 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:23.125 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:23.383 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:23.383 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.383 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:23.383 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:23.383 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:23.383 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.383 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.383 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.383 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.383 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.383 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.383 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.383 18:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.948 00:15:23.948 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:23.948 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:23.948 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.206 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.206 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.206 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.206 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.206 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.206 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.206 { 00:15:24.206 "cntlid": 35, 00:15:24.206 "qid": 0, 00:15:24.206 "state": "enabled", 00:15:24.206 "thread": "nvmf_tgt_poll_group_000", 00:15:24.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:24.206 "listen_address": { 00:15:24.206 "trtype": "TCP", 00:15:24.206 "adrfam": "IPv4", 00:15:24.206 "traddr": "10.0.0.2", 00:15:24.206 "trsvcid": "4420" 00:15:24.206 }, 00:15:24.206 "peer_address": { 00:15:24.206 "trtype": "TCP", 00:15:24.206 "adrfam": "IPv4", 00:15:24.206 "traddr": "10.0.0.1", 00:15:24.206 "trsvcid": "38358" 00:15:24.206 }, 00:15:24.206 "auth": { 00:15:24.206 "state": "completed", 00:15:24.206 "digest": "sha256", 00:15:24.206 "dhgroup": "ffdhe6144" 00:15:24.206 } 00:15:24.206 } 00:15:24.206 ]' 00:15:24.206 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.206 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:24.206 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.464 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:24.464 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.464 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.464 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.464 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.722 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:15:24.722 18:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:15:25.656 18:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.656 18:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:25.656 18:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.656 18:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.656 18:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.656 18:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.656 18:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:25.656 18:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:25.656 18:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:25.656 18:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.656 18:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:25.656 18:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:25.656 18:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:25.656 18:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.656 18:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.656 18:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.656 18:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.656 18:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.656 18:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.656 18:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.656 18:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.222 00:15:26.222 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.222 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.222 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.480 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.480 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.480 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.480 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.480 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.480 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.480 { 00:15:26.480 "cntlid": 37, 00:15:26.480 "qid": 0, 00:15:26.480 "state": "enabled", 00:15:26.480 "thread": "nvmf_tgt_poll_group_000", 00:15:26.480 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:26.480 "listen_address": { 00:15:26.480 "trtype": "TCP", 00:15:26.480 "adrfam": "IPv4", 00:15:26.480 "traddr": "10.0.0.2", 00:15:26.480 "trsvcid": "4420" 00:15:26.480 }, 00:15:26.480 "peer_address": { 00:15:26.480 "trtype": "TCP", 00:15:26.480 "adrfam": "IPv4", 00:15:26.480 "traddr": "10.0.0.1", 00:15:26.480 "trsvcid": "57664" 00:15:26.480 }, 00:15:26.480 "auth": { 00:15:26.480 "state": "completed", 00:15:26.480 "digest": "sha256", 00:15:26.480 "dhgroup": "ffdhe6144" 00:15:26.480 } 00:15:26.480 } 00:15:26.480 ]' 00:15:26.480 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.738 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:26.738 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.738 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:26.738 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.738 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.738 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:26.738 18:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.996 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:15:26.996 18:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:15:27.930 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.930 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:27.930 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.930 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.930 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.930 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:27.930 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:27.930 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:28.188 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:28.188 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.188 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:28.188 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:28.188 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:28.188 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.188 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:28.188 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.188 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.188 18:22:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.188 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:28.188 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:28.188 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:28.753 00:15:28.753 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:28.753 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:28.753 18:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.012 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.012 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.012 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.012 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.012 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.012 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.012 { 00:15:29.012 "cntlid": 39, 00:15:29.012 "qid": 0, 00:15:29.012 "state": "enabled", 00:15:29.012 "thread": "nvmf_tgt_poll_group_000", 00:15:29.012 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:29.012 "listen_address": { 00:15:29.012 "trtype": "TCP", 00:15:29.012 "adrfam": "IPv4", 00:15:29.012 "traddr": "10.0.0.2", 00:15:29.012 "trsvcid": "4420" 00:15:29.012 }, 00:15:29.012 "peer_address": { 00:15:29.012 "trtype": "TCP", 00:15:29.012 "adrfam": "IPv4", 00:15:29.012 "traddr": "10.0.0.1", 00:15:29.012 "trsvcid": "57692" 00:15:29.012 }, 00:15:29.012 "auth": { 00:15:29.012 "state": "completed", 00:15:29.012 "digest": "sha256", 00:15:29.012 "dhgroup": "ffdhe6144" 00:15:29.012 } 00:15:29.012 } 00:15:29.012 ]' 00:15:29.012 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.012 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:29.012 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.012 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:29.012 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.271 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:15:29.271 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.271 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.530 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:15:29.530 18:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:15:30.464 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.464 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:30.464 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.464 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.465 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.465 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:30.465 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.465 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:30.465 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:30.723 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:30.723 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:30.723 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:30.723 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:30.723 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:30.723 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.723 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.723 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
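Each block of trace above is one pass of the test's connect_authenticate helper: the host-side bdev layer is pinned to a single DH-CHAP digest and DH group, the host NQN is re-added to the subsystem with the key pair under test, a controller is attached through the host RPC socket, the resulting qpair's auth state is checked, and everything is torn down again (including a second handshake through the kernel initiator with nvme-cli). The shell sketch below condenses one such iteration using only the RPC verbs and flags visible in the trace; it assumes the named keys (key2/ckey2) were registered earlier in the run and that target-side RPCs go to the default SPDK socket, neither of which is shown in this excerpt, and it leaves the DHHC-1 secrets as placeholder variables.

# Sketch of one connect_authenticate iteration (sha256 / ffdhe6144 / key2),
# assembled from the RPC calls traced above. Assumes key2/ckey2 were registered
# earlier in the run and that the target uses its default RPC socket.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

# Pin the host-side initiator to the digest/dhgroup combination under test.
"$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144

# Target side: allow the host NQN with the key pair under test.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Host side: attach a controller; this is where DH-HMAC-CHAP actually runs.
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Verify the controller came up and the qpair reports a completed authentication
# with the expected digest and DH group.
[[ "$("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]
[[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha256 ]]
[[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == ffdhe6144 ]]

# Detach the bdev controller, then repeat the handshake through the kernel
# initiator, passing the generated DHHC-1 secrets directly (placeholders here).
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 \
    --dhchap-secret "$DHCHAP_SECRET" --dhchap-ctrl-secret "$DHCHAP_CTRL_SECRET"
nvme disconnect -n "$subnqn"

# Drop the host again so the next key/dhgroup combination starts from scratch.
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"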
00:15:30.723 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.723 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.723 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.723 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.723 18:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.287 00:15:31.545 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:31.545 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:31.545 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.802 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.802 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.802 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.802 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.802 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.802 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:31.802 { 00:15:31.802 "cntlid": 41, 00:15:31.802 "qid": 0, 00:15:31.802 "state": "enabled", 00:15:31.802 "thread": "nvmf_tgt_poll_group_000", 00:15:31.802 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:31.802 "listen_address": { 00:15:31.802 "trtype": "TCP", 00:15:31.802 "adrfam": "IPv4", 00:15:31.802 "traddr": "10.0.0.2", 00:15:31.802 "trsvcid": "4420" 00:15:31.802 }, 00:15:31.802 "peer_address": { 00:15:31.802 "trtype": "TCP", 00:15:31.802 "adrfam": "IPv4", 00:15:31.802 "traddr": "10.0.0.1", 00:15:31.802 "trsvcid": "57722" 00:15:31.802 }, 00:15:31.802 "auth": { 00:15:31.803 "state": "completed", 00:15:31.803 "digest": "sha256", 00:15:31.803 "dhgroup": "ffdhe8192" 00:15:31.803 } 00:15:31.803 } 00:15:31.803 ]' 00:15:31.803 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:31.803 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:31.803 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:31.803 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:31.803 18:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:31.803 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.803 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.803 18:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.060 18:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:15:32.060 18:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:15:32.994 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.994 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:32.994 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.994 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.994 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.994 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:32.994 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:32.994 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:33.560 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:33.560 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:33.560 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:33.560 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:33.560 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:33.560 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.560 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.560 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.560 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.560 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.560 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.560 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.560 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.493 00:15:34.493 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:34.493 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:34.493 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.493 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.493 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.493 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.493 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.493 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.493 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.493 { 00:15:34.493 "cntlid": 43, 00:15:34.493 "qid": 0, 00:15:34.493 "state": "enabled", 00:15:34.493 "thread": "nvmf_tgt_poll_group_000", 00:15:34.493 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:34.493 "listen_address": { 00:15:34.493 "trtype": "TCP", 00:15:34.493 "adrfam": "IPv4", 00:15:34.493 "traddr": "10.0.0.2", 00:15:34.493 "trsvcid": "4420" 00:15:34.493 }, 00:15:34.493 "peer_address": { 00:15:34.493 "trtype": "TCP", 00:15:34.493 "adrfam": "IPv4", 00:15:34.493 "traddr": "10.0.0.1", 00:15:34.493 "trsvcid": "57754" 00:15:34.493 }, 00:15:34.493 "auth": { 00:15:34.493 "state": "completed", 00:15:34.493 "digest": "sha256", 00:15:34.493 "dhgroup": "ffdhe8192" 00:15:34.493 } 00:15:34.493 } 00:15:34.493 ]' 00:15:34.493 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.752 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:15:34.752 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.752 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:34.752 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.752 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.752 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.752 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.010 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:15:35.010 18:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:15:35.944 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.944 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:35.944 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.944 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.944 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.944 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.944 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:35.944 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:36.202 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:36.202 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.202 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:36.202 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:36.202 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:36.202 18:22:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.202 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.202 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.202 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.202 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.202 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.202 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.202 18:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.136 00:15:37.136 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:37.136 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.136 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:37.394 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.394 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.394 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.394 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.394 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.394 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.394 { 00:15:37.394 "cntlid": 45, 00:15:37.394 "qid": 0, 00:15:37.394 "state": "enabled", 00:15:37.394 "thread": "nvmf_tgt_poll_group_000", 00:15:37.394 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:37.394 "listen_address": { 00:15:37.394 "trtype": "TCP", 00:15:37.394 "adrfam": "IPv4", 00:15:37.394 "traddr": "10.0.0.2", 00:15:37.394 "trsvcid": "4420" 00:15:37.394 }, 00:15:37.394 "peer_address": { 00:15:37.394 "trtype": "TCP", 00:15:37.394 "adrfam": "IPv4", 00:15:37.394 "traddr": "10.0.0.1", 00:15:37.394 "trsvcid": "59740" 00:15:37.394 }, 00:15:37.394 "auth": { 00:15:37.394 "state": "completed", 00:15:37.394 "digest": "sha256", 00:15:37.394 "dhgroup": "ffdhe8192" 00:15:37.394 } 00:15:37.394 } 00:15:37.394 ]' 00:15:37.394 
18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.394 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:37.394 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.394 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:37.394 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.394 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.394 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.394 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.651 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:15:37.652 18:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:15:38.584 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.584 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:38.584 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.584 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.584 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.584 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.584 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:38.584 18:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:39.149 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:39.149 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.149 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:39.149 18:22:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:39.149 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:39.149 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.149 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:39.149 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.149 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.149 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.149 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:39.149 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:39.149 18:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:39.714 00:15:39.972 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.972 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.972 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.233 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.233 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.233 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.233 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.233 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.233 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.233 { 00:15:40.233 "cntlid": 47, 00:15:40.233 "qid": 0, 00:15:40.233 "state": "enabled", 00:15:40.233 "thread": "nvmf_tgt_poll_group_000", 00:15:40.233 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:40.233 "listen_address": { 00:15:40.233 "trtype": "TCP", 00:15:40.233 "adrfam": "IPv4", 00:15:40.233 "traddr": "10.0.0.2", 00:15:40.233 "trsvcid": "4420" 00:15:40.233 }, 00:15:40.233 "peer_address": { 00:15:40.233 "trtype": "TCP", 00:15:40.233 "adrfam": "IPv4", 00:15:40.233 "traddr": "10.0.0.1", 00:15:40.233 "trsvcid": "59762" 00:15:40.233 }, 00:15:40.233 "auth": { 00:15:40.233 "state": "completed", 00:15:40.233 
"digest": "sha256", 00:15:40.233 "dhgroup": "ffdhe8192" 00:15:40.233 } 00:15:40.233 } 00:15:40.233 ]' 00:15:40.233 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.233 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:40.233 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.233 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:40.233 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.233 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.233 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.233 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.493 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:15:40.494 18:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:15:41.426 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.426 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:41.426 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.426 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.426 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.426 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:41.426 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:41.426 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.426 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:41.426 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:41.685 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:41.685 18:22:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:41.685 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:41.685 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:41.685 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:41.685 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.685 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.685 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.685 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.943 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.943 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.943 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.943 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.200 00:15:42.200 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.200 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.200 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.458 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.458 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.458 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.458 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.458 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.458 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.458 { 00:15:42.458 "cntlid": 49, 00:15:42.458 "qid": 0, 00:15:42.458 "state": "enabled", 00:15:42.458 "thread": "nvmf_tgt_poll_group_000", 00:15:42.458 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:42.458 "listen_address": { 00:15:42.458 "trtype": "TCP", 00:15:42.458 "adrfam": "IPv4", 
00:15:42.458 "traddr": "10.0.0.2", 00:15:42.458 "trsvcid": "4420" 00:15:42.458 }, 00:15:42.458 "peer_address": { 00:15:42.458 "trtype": "TCP", 00:15:42.458 "adrfam": "IPv4", 00:15:42.458 "traddr": "10.0.0.1", 00:15:42.458 "trsvcid": "59780" 00:15:42.458 }, 00:15:42.458 "auth": { 00:15:42.458 "state": "completed", 00:15:42.458 "digest": "sha384", 00:15:42.458 "dhgroup": "null" 00:15:42.458 } 00:15:42.458 } 00:15:42.458 ]' 00:15:42.458 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.458 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.459 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.459 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:42.459 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.459 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.459 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.459 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.024 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:15:43.024 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:15:43.958 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.958 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:43.958 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.958 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.958 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.958 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.958 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:43.958 18:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:43.958 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:43.958 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.958 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:43.958 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:43.958 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:43.958 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.958 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.958 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.958 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.958 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.958 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.958 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.959 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.524 00:15:44.524 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.524 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.524 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.524 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.524 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.524 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.524 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.524 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.524 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.524 { 00:15:44.524 "cntlid": 51, 00:15:44.524 "qid": 0, 00:15:44.524 "state": "enabled", 
00:15:44.524 "thread": "nvmf_tgt_poll_group_000", 00:15:44.524 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:44.524 "listen_address": { 00:15:44.524 "trtype": "TCP", 00:15:44.524 "adrfam": "IPv4", 00:15:44.524 "traddr": "10.0.0.2", 00:15:44.524 "trsvcid": "4420" 00:15:44.524 }, 00:15:44.524 "peer_address": { 00:15:44.524 "trtype": "TCP", 00:15:44.524 "adrfam": "IPv4", 00:15:44.524 "traddr": "10.0.0.1", 00:15:44.524 "trsvcid": "59812" 00:15:44.524 }, 00:15:44.524 "auth": { 00:15:44.524 "state": "completed", 00:15:44.524 "digest": "sha384", 00:15:44.524 "dhgroup": "null" 00:15:44.524 } 00:15:44.524 } 00:15:44.524 ]' 00:15:44.524 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.782 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.782 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.782 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:44.782 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.782 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.782 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.782 18:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.040 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:15:45.040 18:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:15:46.003 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.003 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:46.003 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.003 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.003 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.003 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.003 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:15:46.003 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:46.259 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:46.259 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.259 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:46.259 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:46.259 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:46.259 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.259 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.259 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.259 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.259 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.259 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.259 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.259 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.516 00:15:46.516 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.516 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:46.516 18:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.079 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.079 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.079 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.079 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.079 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.079 18:23:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.079 { 00:15:47.079 "cntlid": 53, 00:15:47.079 "qid": 0, 00:15:47.079 "state": "enabled", 00:15:47.079 "thread": "nvmf_tgt_poll_group_000", 00:15:47.079 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:47.079 "listen_address": { 00:15:47.079 "trtype": "TCP", 00:15:47.079 "adrfam": "IPv4", 00:15:47.079 "traddr": "10.0.0.2", 00:15:47.079 "trsvcid": "4420" 00:15:47.079 }, 00:15:47.079 "peer_address": { 00:15:47.079 "trtype": "TCP", 00:15:47.079 "adrfam": "IPv4", 00:15:47.079 "traddr": "10.0.0.1", 00:15:47.079 "trsvcid": "54412" 00:15:47.079 }, 00:15:47.079 "auth": { 00:15:47.079 "state": "completed", 00:15:47.079 "digest": "sha384", 00:15:47.079 "dhgroup": "null" 00:15:47.079 } 00:15:47.079 } 00:15:47.079 ]' 00:15:47.079 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.080 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:47.080 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.080 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:47.080 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.080 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.080 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.080 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.337 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:15:47.337 18:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:15:48.270 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.270 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:48.270 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.270 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.270 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.271 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:15:48.271 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:48.271 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:48.530 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:48.530 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.530 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:48.530 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:48.530 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:48.530 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.530 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:48.530 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.530 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.530 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.530 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:48.530 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:48.530 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:48.787 00:15:48.787 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.787 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.787 18:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.045 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.045 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.045 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.045 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.045 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.045 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.045 { 00:15:49.045 "cntlid": 55, 00:15:49.045 "qid": 0, 00:15:49.045 "state": "enabled", 00:15:49.045 "thread": "nvmf_tgt_poll_group_000", 00:15:49.045 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:49.045 "listen_address": { 00:15:49.045 "trtype": "TCP", 00:15:49.045 "adrfam": "IPv4", 00:15:49.045 "traddr": "10.0.0.2", 00:15:49.045 "trsvcid": "4420" 00:15:49.045 }, 00:15:49.045 "peer_address": { 00:15:49.045 "trtype": "TCP", 00:15:49.045 "adrfam": "IPv4", 00:15:49.045 "traddr": "10.0.0.1", 00:15:49.045 "trsvcid": "54438" 00:15:49.045 }, 00:15:49.045 "auth": { 00:15:49.045 "state": "completed", 00:15:49.045 "digest": "sha384", 00:15:49.045 "dhgroup": "null" 00:15:49.045 } 00:15:49.045 } 00:15:49.045 ]' 00:15:49.045 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.045 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:49.045 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.045 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:49.045 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.301 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.301 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.301 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.557 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:15:49.557 18:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:15:50.485 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.485 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:50.485 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.485 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.485 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.485 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:50.485 18:23:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.485 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:50.485 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:50.741 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:50.741 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.741 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:50.741 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:50.741 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:50.741 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.741 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.741 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.741 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.741 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.741 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.741 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.741 18:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.997 00:15:50.997 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.997 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.997 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.254 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.254 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.254 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:51.254 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.538 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.538 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.538 { 00:15:51.538 "cntlid": 57, 00:15:51.538 "qid": 0, 00:15:51.538 "state": "enabled", 00:15:51.538 "thread": "nvmf_tgt_poll_group_000", 00:15:51.538 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:51.538 "listen_address": { 00:15:51.538 "trtype": "TCP", 00:15:51.538 "adrfam": "IPv4", 00:15:51.538 "traddr": "10.0.0.2", 00:15:51.538 "trsvcid": "4420" 00:15:51.538 }, 00:15:51.538 "peer_address": { 00:15:51.538 "trtype": "TCP", 00:15:51.538 "adrfam": "IPv4", 00:15:51.538 "traddr": "10.0.0.1", 00:15:51.538 "trsvcid": "54464" 00:15:51.538 }, 00:15:51.538 "auth": { 00:15:51.538 "state": "completed", 00:15:51.538 "digest": "sha384", 00:15:51.538 "dhgroup": "ffdhe2048" 00:15:51.538 } 00:15:51.538 } 00:15:51.538 ]' 00:15:51.538 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.538 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:51.538 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.538 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:51.538 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.538 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.538 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.538 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.818 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:15:51.818 18:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:15:52.750 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.750 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:52.750 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.750 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.750 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.750 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.750 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:52.750 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:53.006 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:53.006 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.006 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:53.006 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:53.006 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:53.006 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.006 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.006 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.006 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.006 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.006 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.006 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.006 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.262 00:15:53.262 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.263 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.263 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.524 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.524 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.524 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.524 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.524 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.524 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.524 { 00:15:53.524 "cntlid": 59, 00:15:53.524 "qid": 0, 00:15:53.524 "state": "enabled", 00:15:53.524 "thread": "nvmf_tgt_poll_group_000", 00:15:53.524 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:53.524 "listen_address": { 00:15:53.524 "trtype": "TCP", 00:15:53.524 "adrfam": "IPv4", 00:15:53.524 "traddr": "10.0.0.2", 00:15:53.524 "trsvcid": "4420" 00:15:53.524 }, 00:15:53.524 "peer_address": { 00:15:53.524 "trtype": "TCP", 00:15:53.524 "adrfam": "IPv4", 00:15:53.524 "traddr": "10.0.0.1", 00:15:53.524 "trsvcid": "54488" 00:15:53.524 }, 00:15:53.524 "auth": { 00:15:53.524 "state": "completed", 00:15:53.524 "digest": "sha384", 00:15:53.524 "dhgroup": "ffdhe2048" 00:15:53.524 } 00:15:53.524 } 00:15:53.524 ]' 00:15:53.524 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.784 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:53.784 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.784 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:53.784 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.784 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.784 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.784 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.040 18:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:15:54.040 18:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:15:54.972 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.972 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:54.972 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.972 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.972 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.972 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.972 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:54.972 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:55.230 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:55.230 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.230 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:55.230 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:55.230 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:55.230 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.230 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.230 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.230 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.230 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.230 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.230 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.230 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.795 00:15:55.795 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.795 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.795 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.795 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.795 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.795 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.795 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.053 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.053 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.053 { 00:15:56.053 "cntlid": 61, 00:15:56.053 "qid": 0, 00:15:56.053 "state": "enabled", 00:15:56.053 "thread": "nvmf_tgt_poll_group_000", 00:15:56.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:56.053 "listen_address": { 00:15:56.053 "trtype": "TCP", 00:15:56.053 "adrfam": "IPv4", 00:15:56.053 "traddr": "10.0.0.2", 00:15:56.053 "trsvcid": "4420" 00:15:56.053 }, 00:15:56.053 "peer_address": { 00:15:56.053 "trtype": "TCP", 00:15:56.053 "adrfam": "IPv4", 00:15:56.053 "traddr": "10.0.0.1", 00:15:56.053 "trsvcid": "45264" 00:15:56.053 }, 00:15:56.053 "auth": { 00:15:56.053 "state": "completed", 00:15:56.053 "digest": "sha384", 00:15:56.053 "dhgroup": "ffdhe2048" 00:15:56.053 } 00:15:56.053 } 00:15:56.053 ]' 00:15:56.053 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.053 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:56.053 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.053 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:56.053 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.053 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.053 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.053 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.311 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:15:56.311 18:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:15:57.244 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.244 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:57.244 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.244 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.244 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.244 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.244 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:57.244 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:57.502 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:57.502 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.502 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:57.502 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:57.502 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:57.502 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.502 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:57.502 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.502 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.502 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.502 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:57.502 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:57.502 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:57.760 00:15:57.760 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.760 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.760 18:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.017 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.017 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.017 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.017 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.017 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.017 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.017 { 00:15:58.017 "cntlid": 63, 00:15:58.017 "qid": 0, 00:15:58.017 "state": "enabled", 00:15:58.018 "thread": "nvmf_tgt_poll_group_000", 00:15:58.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:58.018 "listen_address": { 00:15:58.018 "trtype": "TCP", 00:15:58.018 "adrfam": "IPv4", 00:15:58.018 "traddr": "10.0.0.2", 00:15:58.018 "trsvcid": "4420" 00:15:58.018 }, 00:15:58.018 "peer_address": { 00:15:58.018 "trtype": "TCP", 00:15:58.018 "adrfam": "IPv4", 00:15:58.018 "traddr": "10.0.0.1", 00:15:58.018 "trsvcid": "45300" 00:15:58.018 }, 00:15:58.018 "auth": { 00:15:58.018 "state": "completed", 00:15:58.018 "digest": "sha384", 00:15:58.018 "dhgroup": "ffdhe2048" 00:15:58.018 } 00:15:58.018 } 00:15:58.018 ]' 00:15:58.018 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.275 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:58.275 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.275 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:58.275 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.275 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.275 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.275 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.533 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:15:58.533 18:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:15:59.337 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:15:59.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.337 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:59.337 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.337 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.337 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.337 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:59.337 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.337 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:59.337 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:59.595 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:59.595 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.595 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:59.595 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:59.595 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:59.595 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.595 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.595 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.595 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.853 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.853 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.853 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.853 18:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.110 
00:16:00.110 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.110 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.110 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.367 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.367 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.367 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.368 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.368 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.368 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.368 { 00:16:00.368 "cntlid": 65, 00:16:00.368 "qid": 0, 00:16:00.368 "state": "enabled", 00:16:00.368 "thread": "nvmf_tgt_poll_group_000", 00:16:00.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:00.368 "listen_address": { 00:16:00.368 "trtype": "TCP", 00:16:00.368 "adrfam": "IPv4", 00:16:00.368 "traddr": "10.0.0.2", 00:16:00.368 "trsvcid": "4420" 00:16:00.368 }, 00:16:00.368 "peer_address": { 00:16:00.368 "trtype": "TCP", 00:16:00.368 "adrfam": "IPv4", 00:16:00.368 "traddr": "10.0.0.1", 00:16:00.368 "trsvcid": "45338" 00:16:00.368 }, 00:16:00.368 "auth": { 00:16:00.368 "state": "completed", 00:16:00.368 "digest": "sha384", 00:16:00.368 "dhgroup": "ffdhe3072" 00:16:00.368 } 00:16:00.368 } 00:16:00.368 ]' 00:16:00.368 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.368 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:00.368 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.368 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:00.368 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.368 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.368 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.368 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.933 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:16:00.933 18:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:16:01.868 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.868 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:01.868 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.868 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.868 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.868 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.868 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:01.868 18:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:01.868 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:01.868 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.868 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:01.868 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:01.868 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:01.868 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.868 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.868 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.868 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.868 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.868 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.868 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.868 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.455 00:16:02.455 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.455 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.455 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.713 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.713 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.713 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.713 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.713 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.713 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:02.713 { 00:16:02.713 "cntlid": 67, 00:16:02.713 "qid": 0, 00:16:02.713 "state": "enabled", 00:16:02.713 "thread": "nvmf_tgt_poll_group_000", 00:16:02.713 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:02.713 "listen_address": { 00:16:02.713 "trtype": "TCP", 00:16:02.713 "adrfam": "IPv4", 00:16:02.713 "traddr": "10.0.0.2", 00:16:02.713 "trsvcid": "4420" 00:16:02.713 }, 00:16:02.713 "peer_address": { 00:16:02.713 "trtype": "TCP", 00:16:02.713 "adrfam": "IPv4", 00:16:02.713 "traddr": "10.0.0.1", 00:16:02.713 "trsvcid": "45354" 00:16:02.713 }, 00:16:02.713 "auth": { 00:16:02.713 "state": "completed", 00:16:02.713 "digest": "sha384", 00:16:02.713 "dhgroup": "ffdhe3072" 00:16:02.713 } 00:16:02.713 } 00:16:02.713 ]' 00:16:02.713 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:02.713 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:02.713 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.713 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:02.713 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.713 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.713 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.713 18:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.972 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret 
DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:16:02.972 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:16:03.905 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.905 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:03.905 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.905 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.905 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.905 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.905 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:03.905 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:04.163 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:04.163 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.163 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:04.163 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:04.163 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:04.163 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.163 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.163 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.163 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.163 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.163 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.163 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.163 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.730 00:16:04.730 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.730 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.730 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.988 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.988 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.988 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.988 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.988 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.988 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.988 { 00:16:04.988 "cntlid": 69, 00:16:04.988 "qid": 0, 00:16:04.988 "state": "enabled", 00:16:04.988 "thread": "nvmf_tgt_poll_group_000", 00:16:04.988 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:04.988 "listen_address": { 00:16:04.988 "trtype": "TCP", 00:16:04.988 "adrfam": "IPv4", 00:16:04.988 "traddr": "10.0.0.2", 00:16:04.988 "trsvcid": "4420" 00:16:04.988 }, 00:16:04.988 "peer_address": { 00:16:04.988 "trtype": "TCP", 00:16:04.988 "adrfam": "IPv4", 00:16:04.988 "traddr": "10.0.0.1", 00:16:04.988 "trsvcid": "52954" 00:16:04.988 }, 00:16:04.988 "auth": { 00:16:04.988 "state": "completed", 00:16:04.988 "digest": "sha384", 00:16:04.988 "dhgroup": "ffdhe3072" 00:16:04.988 } 00:16:04.988 } 00:16:04.988 ]' 00:16:04.988 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.988 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:04.988 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.988 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:04.988 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:04.988 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.988 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.988 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:16:05.246 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:16:05.246 18:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:16:06.180 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.180 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:06.180 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.180 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.180 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.180 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.180 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:06.180 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:06.438 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:06.438 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.438 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:06.438 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:06.438 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:06.438 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.438 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:06.438 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.438 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.438 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.438 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
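After each attach the script does not rely on the RPC exit code alone: it reads back the controller name from the initiator and the freshly created qpair from the target, then asserts that the qpair's auth block records the digest and DH group it just configured and that the handshake completed. A minimal sketch of that verification, using the same wrappers and NQN variables as in the sketch above:

[[ "$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
[[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == "sha384" ]]     # negotiated hash
[[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == "ffdhe3072" ]]  # negotiated DH group
[[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == "completed" ]]  # AUTH transaction finished

# tear the bdev controller down before exercising the kernel initiator
hostrpc bdev_nvme_detach_controller nvme0
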
00:16:06.438 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:06.438 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:07.003 00:16:07.004 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.004 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.004 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.262 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.262 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.262 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.262 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.262 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.262 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.262 { 00:16:07.262 "cntlid": 71, 00:16:07.262 "qid": 0, 00:16:07.262 "state": "enabled", 00:16:07.262 "thread": "nvmf_tgt_poll_group_000", 00:16:07.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:07.262 "listen_address": { 00:16:07.262 "trtype": "TCP", 00:16:07.262 "adrfam": "IPv4", 00:16:07.262 "traddr": "10.0.0.2", 00:16:07.262 "trsvcid": "4420" 00:16:07.262 }, 00:16:07.262 "peer_address": { 00:16:07.262 "trtype": "TCP", 00:16:07.262 "adrfam": "IPv4", 00:16:07.262 "traddr": "10.0.0.1", 00:16:07.262 "trsvcid": "52990" 00:16:07.262 }, 00:16:07.262 "auth": { 00:16:07.262 "state": "completed", 00:16:07.262 "digest": "sha384", 00:16:07.262 "dhgroup": "ffdhe3072" 00:16:07.262 } 00:16:07.262 } 00:16:07.262 ]' 00:16:07.262 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.262 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:07.262 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.262 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:07.262 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.262 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.262 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.262 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.520 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:16:07.520 18:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:16:08.453 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.453 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:08.453 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.453 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.453 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.453 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:08.453 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.453 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:08.453 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:08.711 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:08.711 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.711 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:08.711 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:08.711 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:08.711 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.711 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.711 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.711 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.711 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:08.711 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.711 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.711 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.276 00:16:09.276 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.276 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.276 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.534 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.534 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.534 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.535 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.535 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.535 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.535 { 00:16:09.535 "cntlid": 73, 00:16:09.535 "qid": 0, 00:16:09.535 "state": "enabled", 00:16:09.535 "thread": "nvmf_tgt_poll_group_000", 00:16:09.535 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:09.535 "listen_address": { 00:16:09.535 "trtype": "TCP", 00:16:09.535 "adrfam": "IPv4", 00:16:09.535 "traddr": "10.0.0.2", 00:16:09.535 "trsvcid": "4420" 00:16:09.535 }, 00:16:09.535 "peer_address": { 00:16:09.535 "trtype": "TCP", 00:16:09.535 "adrfam": "IPv4", 00:16:09.535 "traddr": "10.0.0.1", 00:16:09.535 "trsvcid": "53010" 00:16:09.535 }, 00:16:09.535 "auth": { 00:16:09.535 "state": "completed", 00:16:09.535 "digest": "sha384", 00:16:09.535 "dhgroup": "ffdhe4096" 00:16:09.535 } 00:16:09.535 } 00:16:09.535 ]' 00:16:09.535 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.535 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:09.535 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.535 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:09.535 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.535 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.535 
18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.535 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.793 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:16:09.793 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:16:10.724 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.724 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:10.724 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.724 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.724 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.724 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.724 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:10.724 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:10.982 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:10.982 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.982 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:10.982 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:10.982 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:10.982 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.982 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.982 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.982 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.982 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.982 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.982 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.982 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.548 00:16:11.548 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.548 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.548 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.805 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.805 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.805 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.805 18:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.805 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.805 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.805 { 00:16:11.805 "cntlid": 75, 00:16:11.805 "qid": 0, 00:16:11.805 "state": "enabled", 00:16:11.805 "thread": "nvmf_tgt_poll_group_000", 00:16:11.805 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:11.805 "listen_address": { 00:16:11.805 "trtype": "TCP", 00:16:11.805 "adrfam": "IPv4", 00:16:11.805 "traddr": "10.0.0.2", 00:16:11.805 "trsvcid": "4420" 00:16:11.805 }, 00:16:11.805 "peer_address": { 00:16:11.805 "trtype": "TCP", 00:16:11.805 "adrfam": "IPv4", 00:16:11.805 "traddr": "10.0.0.1", 00:16:11.805 "trsvcid": "53030" 00:16:11.805 }, 00:16:11.805 "auth": { 00:16:11.805 "state": "completed", 00:16:11.805 "digest": "sha384", 00:16:11.805 "dhgroup": "ffdhe4096" 00:16:11.805 } 00:16:11.805 } 00:16:11.805 ]' 00:16:11.805 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.805 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:11.805 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.805 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:16:11.806 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.063 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.063 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.063 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.321 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:16:12.321 18:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:16:13.253 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.253 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:13.253 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.253 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.253 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.253 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.253 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:13.253 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:13.510 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:13.510 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.510 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:13.510 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:13.510 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:13.510 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.510 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.510 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.510 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.510 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.510 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.510 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.510 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.768 00:16:14.025 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.025 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.025 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.283 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.283 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.283 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.283 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.283 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.283 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.283 { 00:16:14.283 "cntlid": 77, 00:16:14.283 "qid": 0, 00:16:14.283 "state": "enabled", 00:16:14.283 "thread": "nvmf_tgt_poll_group_000", 00:16:14.283 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:14.283 "listen_address": { 00:16:14.283 "trtype": "TCP", 00:16:14.283 "adrfam": "IPv4", 00:16:14.283 "traddr": "10.0.0.2", 00:16:14.283 "trsvcid": "4420" 00:16:14.283 }, 00:16:14.283 "peer_address": { 00:16:14.283 "trtype": "TCP", 00:16:14.283 "adrfam": "IPv4", 00:16:14.283 "traddr": "10.0.0.1", 00:16:14.283 "trsvcid": "53052" 00:16:14.283 }, 00:16:14.283 "auth": { 00:16:14.283 "state": "completed", 00:16:14.283 "digest": "sha384", 00:16:14.283 "dhgroup": "ffdhe4096" 00:16:14.283 } 00:16:14.283 } 00:16:14.283 ]' 00:16:14.283 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.283 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:14.283 18:23:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.283 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:14.283 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.283 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.283 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.283 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.541 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:16:14.541 18:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:16:15.475 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.475 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:15.475 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.475 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.475 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.475 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.475 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:15.475 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:15.733 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:15.733 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.733 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:15.733 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:15.733 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:15.733 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.733 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:15.733 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.733 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.733 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.733 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:15.733 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:15.733 18:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:16.299 00:16:16.299 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.299 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.299 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.299 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.299 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.299 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.299 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.299 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.299 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.299 { 00:16:16.299 "cntlid": 79, 00:16:16.299 "qid": 0, 00:16:16.299 "state": "enabled", 00:16:16.299 "thread": "nvmf_tgt_poll_group_000", 00:16:16.299 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:16.299 "listen_address": { 00:16:16.299 "trtype": "TCP", 00:16:16.299 "adrfam": "IPv4", 00:16:16.299 "traddr": "10.0.0.2", 00:16:16.299 "trsvcid": "4420" 00:16:16.299 }, 00:16:16.299 "peer_address": { 00:16:16.299 "trtype": "TCP", 00:16:16.299 "adrfam": "IPv4", 00:16:16.299 "traddr": "10.0.0.1", 00:16:16.299 "trsvcid": "37084" 00:16:16.299 }, 00:16:16.299 "auth": { 00:16:16.299 "state": "completed", 00:16:16.299 "digest": "sha384", 00:16:16.299 "dhgroup": "ffdhe4096" 00:16:16.299 } 00:16:16.299 } 00:16:16.299 ]' 00:16:16.299 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.557 18:23:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:16.557 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.557 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:16.557 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.557 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.557 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.557 18:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.814 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:16:16.814 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:16:17.745 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.745 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:17.745 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.746 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.746 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.746 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:17.746 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.746 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:17.746 18:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:18.003 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:18.003 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.003 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:18.003 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:18.003 18:23:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:18.003 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.003 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.003 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.003 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.003 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.003 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.003 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.003 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.566 00:16:18.566 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.566 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.566 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.824 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.824 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.824 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.824 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.824 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.824 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.824 { 00:16:18.824 "cntlid": 81, 00:16:18.824 "qid": 0, 00:16:18.824 "state": "enabled", 00:16:18.824 "thread": "nvmf_tgt_poll_group_000", 00:16:18.824 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:18.824 "listen_address": { 00:16:18.824 "trtype": "TCP", 00:16:18.824 "adrfam": "IPv4", 00:16:18.824 "traddr": "10.0.0.2", 00:16:18.824 "trsvcid": "4420" 00:16:18.824 }, 00:16:18.824 "peer_address": { 00:16:18.824 "trtype": "TCP", 00:16:18.824 "adrfam": "IPv4", 00:16:18.824 "traddr": "10.0.0.1", 00:16:18.824 "trsvcid": "37110" 00:16:18.824 }, 00:16:18.824 "auth": { 00:16:18.824 "state": "completed", 00:16:18.824 "digest": 
"sha384", 00:16:18.824 "dhgroup": "ffdhe6144" 00:16:18.824 } 00:16:18.824 } 00:16:18.824 ]' 00:16:18.824 18:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.824 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:18.824 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.824 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:18.824 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.824 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.824 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.824 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.388 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:16:19.388 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:16:20.404 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.404 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:20.404 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.404 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.404 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.404 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.404 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:20.404 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:20.404 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:20.404 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.404 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:20.404 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:20.404 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:20.404 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.404 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.404 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.404 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.404 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.404 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.404 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.404 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.970 00:16:20.970 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.970 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.970 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.228 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.228 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.228 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.228 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.228 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.228 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.228 { 00:16:21.228 "cntlid": 83, 00:16:21.228 "qid": 0, 00:16:21.228 "state": "enabled", 00:16:21.228 "thread": "nvmf_tgt_poll_group_000", 00:16:21.228 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:21.228 "listen_address": { 00:16:21.228 "trtype": "TCP", 00:16:21.228 "adrfam": "IPv4", 00:16:21.228 "traddr": "10.0.0.2", 00:16:21.228 
"trsvcid": "4420" 00:16:21.228 }, 00:16:21.228 "peer_address": { 00:16:21.228 "trtype": "TCP", 00:16:21.228 "adrfam": "IPv4", 00:16:21.228 "traddr": "10.0.0.1", 00:16:21.228 "trsvcid": "37138" 00:16:21.228 }, 00:16:21.228 "auth": { 00:16:21.228 "state": "completed", 00:16:21.228 "digest": "sha384", 00:16:21.228 "dhgroup": "ffdhe6144" 00:16:21.228 } 00:16:21.228 } 00:16:21.228 ]' 00:16:21.486 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.486 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:21.486 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.486 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:21.486 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.486 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.486 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.486 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.745 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:16:21.745 18:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:16:22.680 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.680 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:22.680 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.680 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.680 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.680 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.680 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:22.680 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:22.938 
18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:22.938 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.938 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:22.938 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:22.938 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:22.938 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.938 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.938 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.938 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.938 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.938 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.938 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.938 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.506 00:16:23.506 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.506 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.506 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.764 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.764 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.764 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.764 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.764 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.764 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.764 { 00:16:23.764 "cntlid": 85, 00:16:23.764 "qid": 0, 00:16:23.764 "state": "enabled", 00:16:23.764 "thread": "nvmf_tgt_poll_group_000", 00:16:23.764 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:23.764 "listen_address": { 00:16:23.764 "trtype": "TCP", 00:16:23.764 "adrfam": "IPv4", 00:16:23.764 "traddr": "10.0.0.2", 00:16:23.764 "trsvcid": "4420" 00:16:23.764 }, 00:16:23.764 "peer_address": { 00:16:23.764 "trtype": "TCP", 00:16:23.764 "adrfam": "IPv4", 00:16:23.764 "traddr": "10.0.0.1", 00:16:23.764 "trsvcid": "37166" 00:16:23.764 }, 00:16:23.764 "auth": { 00:16:23.764 "state": "completed", 00:16:23.764 "digest": "sha384", 00:16:23.764 "dhgroup": "ffdhe6144" 00:16:23.764 } 00:16:23.764 } 00:16:23.764 ]' 00:16:23.764 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.764 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:23.764 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.764 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:23.764 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.764 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.764 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.764 18:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.023 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:16:24.023 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:16:24.956 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.956 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:24.956 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.956 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.956 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.956 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.956 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:24.956 18:23:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:25.213 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:25.213 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.213 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:25.213 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:25.213 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:25.213 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.213 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:25.213 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.213 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.213 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.213 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:25.213 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.213 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.779 00:16:25.779 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.779 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.779 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.036 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.036 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.036 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.036 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.036 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.036 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.036 { 00:16:26.036 "cntlid": 87, 
00:16:26.036 "qid": 0, 00:16:26.036 "state": "enabled", 00:16:26.036 "thread": "nvmf_tgt_poll_group_000", 00:16:26.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:26.036 "listen_address": { 00:16:26.036 "trtype": "TCP", 00:16:26.036 "adrfam": "IPv4", 00:16:26.036 "traddr": "10.0.0.2", 00:16:26.036 "trsvcid": "4420" 00:16:26.036 }, 00:16:26.036 "peer_address": { 00:16:26.036 "trtype": "TCP", 00:16:26.036 "adrfam": "IPv4", 00:16:26.036 "traddr": "10.0.0.1", 00:16:26.036 "trsvcid": "44126" 00:16:26.036 }, 00:16:26.036 "auth": { 00:16:26.036 "state": "completed", 00:16:26.036 "digest": "sha384", 00:16:26.036 "dhgroup": "ffdhe6144" 00:16:26.036 } 00:16:26.036 } 00:16:26.036 ]' 00:16:26.036 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.036 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:26.036 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.036 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:26.036 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.293 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.293 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.293 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.550 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:16:26.550 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:16:27.481 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.481 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:27.481 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.481 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.481 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.481 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:27.481 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.481 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:27.481 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:27.739 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:27.739 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.739 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:27.739 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:27.739 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:27.739 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.739 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.739 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.739 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.739 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.739 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.739 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.739 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.672 00:16:28.672 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.672 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.672 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.930 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.930 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.930 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.930 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.930 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.930 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.930 { 00:16:28.930 "cntlid": 89, 00:16:28.930 "qid": 0, 00:16:28.930 "state": "enabled", 00:16:28.930 "thread": "nvmf_tgt_poll_group_000", 00:16:28.930 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:28.930 "listen_address": { 00:16:28.930 "trtype": "TCP", 00:16:28.930 "adrfam": "IPv4", 00:16:28.930 "traddr": "10.0.0.2", 00:16:28.930 "trsvcid": "4420" 00:16:28.930 }, 00:16:28.930 "peer_address": { 00:16:28.930 "trtype": "TCP", 00:16:28.930 "adrfam": "IPv4", 00:16:28.930 "traddr": "10.0.0.1", 00:16:28.930 "trsvcid": "44164" 00:16:28.930 }, 00:16:28.930 "auth": { 00:16:28.930 "state": "completed", 00:16:28.930 "digest": "sha384", 00:16:28.930 "dhgroup": "ffdhe8192" 00:16:28.930 } 00:16:28.930 } 00:16:28.930 ]' 00:16:28.930 18:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.930 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:28.930 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.930 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:28.930 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.930 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.930 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.930 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.188 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:16:29.188 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:16:30.120 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.120 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:30.120 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.120 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.120 18:23:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.120 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.120 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:30.120 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:30.378 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:30.378 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.378 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:30.378 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:30.378 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:30.378 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.378 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.378 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.378 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.378 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.378 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.378 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.378 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.310 00:16:31.310 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.310 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.311 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.568 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.568 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:31.568 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.568 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.568 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.568 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.568 { 00:16:31.568 "cntlid": 91, 00:16:31.568 "qid": 0, 00:16:31.568 "state": "enabled", 00:16:31.568 "thread": "nvmf_tgt_poll_group_000", 00:16:31.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:31.568 "listen_address": { 00:16:31.568 "trtype": "TCP", 00:16:31.568 "adrfam": "IPv4", 00:16:31.568 "traddr": "10.0.0.2", 00:16:31.568 "trsvcid": "4420" 00:16:31.568 }, 00:16:31.568 "peer_address": { 00:16:31.568 "trtype": "TCP", 00:16:31.568 "adrfam": "IPv4", 00:16:31.568 "traddr": "10.0.0.1", 00:16:31.568 "trsvcid": "44186" 00:16:31.568 }, 00:16:31.568 "auth": { 00:16:31.568 "state": "completed", 00:16:31.568 "digest": "sha384", 00:16:31.568 "dhgroup": "ffdhe8192" 00:16:31.568 } 00:16:31.568 } 00:16:31.568 ]' 00:16:31.568 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.568 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:31.568 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.568 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:31.568 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.568 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.568 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.568 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.826 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:16:31.826 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:16:32.759 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.759 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:32.759 18:23:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.759 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.759 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.759 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.759 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:32.760 18:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:33.018 18:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:33.018 18:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.018 18:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:33.018 18:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:33.018 18:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:33.018 18:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.018 18:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.018 18:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.018 18:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.018 18:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.018 18:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.018 18:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.018 18:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.951 00:16:33.951 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.951 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.951 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.209 18:23:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.209 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.209 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.209 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.209 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.209 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.209 { 00:16:34.209 "cntlid": 93, 00:16:34.209 "qid": 0, 00:16:34.209 "state": "enabled", 00:16:34.209 "thread": "nvmf_tgt_poll_group_000", 00:16:34.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:34.209 "listen_address": { 00:16:34.209 "trtype": "TCP", 00:16:34.209 "adrfam": "IPv4", 00:16:34.209 "traddr": "10.0.0.2", 00:16:34.209 "trsvcid": "4420" 00:16:34.209 }, 00:16:34.209 "peer_address": { 00:16:34.209 "trtype": "TCP", 00:16:34.209 "adrfam": "IPv4", 00:16:34.209 "traddr": "10.0.0.1", 00:16:34.209 "trsvcid": "44202" 00:16:34.209 }, 00:16:34.209 "auth": { 00:16:34.209 "state": "completed", 00:16:34.209 "digest": "sha384", 00:16:34.209 "dhgroup": "ffdhe8192" 00:16:34.209 } 00:16:34.209 } 00:16:34.209 ]' 00:16:34.209 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.209 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:34.209 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.209 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:34.209 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.209 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.209 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.209 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.467 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:16:34.467 18:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:16:35.400 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.657 18:23:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:35.657 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.657 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.657 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.657 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.657 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:35.657 18:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:35.915 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:35.915 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.915 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:35.915 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:35.915 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:35.915 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.915 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:35.915 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.915 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.915 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.915 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:35.915 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:35.915 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:36.847 00:16:36.847 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.847 18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.847 
18:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.106 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.106 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.106 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.106 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.106 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.106 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.106 { 00:16:37.106 "cntlid": 95, 00:16:37.106 "qid": 0, 00:16:37.106 "state": "enabled", 00:16:37.106 "thread": "nvmf_tgt_poll_group_000", 00:16:37.106 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:37.106 "listen_address": { 00:16:37.106 "trtype": "TCP", 00:16:37.106 "adrfam": "IPv4", 00:16:37.106 "traddr": "10.0.0.2", 00:16:37.106 "trsvcid": "4420" 00:16:37.106 }, 00:16:37.106 "peer_address": { 00:16:37.106 "trtype": "TCP", 00:16:37.106 "adrfam": "IPv4", 00:16:37.106 "traddr": "10.0.0.1", 00:16:37.106 "trsvcid": "57294" 00:16:37.106 }, 00:16:37.106 "auth": { 00:16:37.106 "state": "completed", 00:16:37.106 "digest": "sha384", 00:16:37.106 "dhgroup": "ffdhe8192" 00:16:37.106 } 00:16:37.106 } 00:16:37.106 ]' 00:16:37.106 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.106 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:37.106 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.106 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:37.106 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.106 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.106 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.106 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.367 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:16:37.367 18:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:16:38.304 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.304 18:23:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:38.304 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.304 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.304 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.304 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:38.304 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:38.304 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.304 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:38.304 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:38.562 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:38.562 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.562 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:38.562 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:38.562 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:38.562 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.562 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.562 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.562 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.820 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.820 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.820 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.820 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.078 00:16:39.078 
18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.078 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.078 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.335 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.335 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.335 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.335 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.335 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.335 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.335 { 00:16:39.335 "cntlid": 97, 00:16:39.335 "qid": 0, 00:16:39.335 "state": "enabled", 00:16:39.335 "thread": "nvmf_tgt_poll_group_000", 00:16:39.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:39.335 "listen_address": { 00:16:39.335 "trtype": "TCP", 00:16:39.335 "adrfam": "IPv4", 00:16:39.335 "traddr": "10.0.0.2", 00:16:39.335 "trsvcid": "4420" 00:16:39.335 }, 00:16:39.335 "peer_address": { 00:16:39.335 "trtype": "TCP", 00:16:39.335 "adrfam": "IPv4", 00:16:39.335 "traddr": "10.0.0.1", 00:16:39.335 "trsvcid": "57324" 00:16:39.335 }, 00:16:39.335 "auth": { 00:16:39.335 "state": "completed", 00:16:39.335 "digest": "sha512", 00:16:39.335 "dhgroup": "null" 00:16:39.335 } 00:16:39.335 } 00:16:39.335 ]' 00:16:39.335 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.335 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:39.335 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.335 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:39.335 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.335 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.335 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.335 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.901 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:16:39.901 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:16:40.834 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.834 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.834 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:40.834 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.834 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.834 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.834 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.834 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:40.834 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:40.834 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:40.834 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.834 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:40.834 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:40.834 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:40.834 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.834 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.834 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.834 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.834 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.834 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.834 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.834 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.400 00:16:41.400 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.400 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.400 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.400 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.400 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.400 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.400 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.400 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.400 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.400 { 00:16:41.400 "cntlid": 99, 00:16:41.400 "qid": 0, 00:16:41.400 "state": "enabled", 00:16:41.400 "thread": "nvmf_tgt_poll_group_000", 00:16:41.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:41.400 "listen_address": { 00:16:41.400 "trtype": "TCP", 00:16:41.400 "adrfam": "IPv4", 00:16:41.400 "traddr": "10.0.0.2", 00:16:41.400 "trsvcid": "4420" 00:16:41.400 }, 00:16:41.400 "peer_address": { 00:16:41.400 "trtype": "TCP", 00:16:41.400 "adrfam": "IPv4", 00:16:41.400 "traddr": "10.0.0.1", 00:16:41.400 "trsvcid": "57344" 00:16:41.400 }, 00:16:41.400 "auth": { 00:16:41.400 "state": "completed", 00:16:41.400 "digest": "sha512", 00:16:41.400 "dhgroup": "null" 00:16:41.400 } 00:16:41.400 } 00:16:41.400 ]' 00:16:41.400 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.659 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.659 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.659 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:41.659 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.659 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.659 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.659 18:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.917 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:16:41.917 18:24:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:16:42.850 18:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.850 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:42.850 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.850 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.850 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.850 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.850 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:42.850 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:43.108 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:16:43.108 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.108 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:43.108 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:43.108 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:43.108 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.108 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.108 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.108 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.108 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.108 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.108 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:43.108 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.366 00:16:43.366 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.366 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.366 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.931 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.931 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.931 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.931 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.931 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.931 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.931 { 00:16:43.931 "cntlid": 101, 00:16:43.931 "qid": 0, 00:16:43.931 "state": "enabled", 00:16:43.931 "thread": "nvmf_tgt_poll_group_000", 00:16:43.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:43.931 "listen_address": { 00:16:43.931 "trtype": "TCP", 00:16:43.931 "adrfam": "IPv4", 00:16:43.931 "traddr": "10.0.0.2", 00:16:43.931 "trsvcid": "4420" 00:16:43.931 }, 00:16:43.931 "peer_address": { 00:16:43.931 "trtype": "TCP", 00:16:43.931 "adrfam": "IPv4", 00:16:43.931 "traddr": "10.0.0.1", 00:16:43.931 "trsvcid": "57380" 00:16:43.931 }, 00:16:43.931 "auth": { 00:16:43.931 "state": "completed", 00:16:43.931 "digest": "sha512", 00:16:43.931 "dhgroup": "null" 00:16:43.931 } 00:16:43.931 } 00:16:43.931 ]' 00:16:43.931 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.931 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.931 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.931 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:43.931 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.931 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.931 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.931 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.189 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:16:44.189 18:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:16:45.122 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.122 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:45.122 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.122 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.122 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.122 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.122 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:45.122 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:45.380 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:45.380 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.380 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:45.380 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:45.380 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:45.380 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.380 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:45.380 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.380 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.380 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.380 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:45.380 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.380 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.638 00:16:45.638 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.638 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.638 18:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.896 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.896 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.896 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.896 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.896 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.896 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.896 { 00:16:45.896 "cntlid": 103, 00:16:45.896 "qid": 0, 00:16:45.896 "state": "enabled", 00:16:45.896 "thread": "nvmf_tgt_poll_group_000", 00:16:45.896 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:45.896 "listen_address": { 00:16:45.896 "trtype": "TCP", 00:16:45.896 "adrfam": "IPv4", 00:16:45.896 "traddr": "10.0.0.2", 00:16:45.896 "trsvcid": "4420" 00:16:45.896 }, 00:16:45.896 "peer_address": { 00:16:45.896 "trtype": "TCP", 00:16:45.896 "adrfam": "IPv4", 00:16:45.896 "traddr": "10.0.0.1", 00:16:45.896 "trsvcid": "52734" 00:16:45.896 }, 00:16:45.896 "auth": { 00:16:45.896 "state": "completed", 00:16:45.896 "digest": "sha512", 00:16:45.896 "dhgroup": "null" 00:16:45.896 } 00:16:45.896 } 00:16:45.896 ]' 00:16:45.896 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.154 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:46.154 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.154 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:46.154 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.154 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.154 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.154 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.411 18:24:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:16:46.412 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:16:47.343 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.343 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:47.343 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.343 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.343 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.343 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:47.343 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.343 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:47.343 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:47.601 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:47.601 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.601 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:47.601 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:47.601 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:47.601 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.601 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.601 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.601 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.601 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.601 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
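Editor's note: the trace repeats the same target-side pattern for every digest/dhgroup/key combination: restrict the host initiator's allowed DH-HMAC-CHAP parameters with bdev_nvme_set_options, then register the host NQN on the subsystem with the matching key names. A minimal stand-alone sketch of that step follows; the RPC script path, socket, NQNs, and key names (key0/ckey0) are copied from the log, and it is assumed those keys were registered via keyring RPCs earlier in the test script (not shown in this excerpt).

#!/usr/bin/env bash
# Sketch only: per-pass DH-HMAC-CHAP setup as performed by target/auth.sh.
set -e

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

# Restrict the host-side initiator to one digest and one DH group for this pass.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

# Register the host NQN on the subsystem with the key pair it must use
# (key0/ckey0 are key names assumed to be registered earlier in the script).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0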
00:16:47.601 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.601 18:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.859 00:16:48.117 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.117 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.117 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.375 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.375 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.375 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.375 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.375 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.375 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.375 { 00:16:48.375 "cntlid": 105, 00:16:48.375 "qid": 0, 00:16:48.375 "state": "enabled", 00:16:48.375 "thread": "nvmf_tgt_poll_group_000", 00:16:48.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:48.375 "listen_address": { 00:16:48.375 "trtype": "TCP", 00:16:48.375 "adrfam": "IPv4", 00:16:48.375 "traddr": "10.0.0.2", 00:16:48.375 "trsvcid": "4420" 00:16:48.375 }, 00:16:48.375 "peer_address": { 00:16:48.375 "trtype": "TCP", 00:16:48.375 "adrfam": "IPv4", 00:16:48.375 "traddr": "10.0.0.1", 00:16:48.375 "trsvcid": "52758" 00:16:48.375 }, 00:16:48.375 "auth": { 00:16:48.375 "state": "completed", 00:16:48.375 "digest": "sha512", 00:16:48.375 "dhgroup": "ffdhe2048" 00:16:48.375 } 00:16:48.375 } 00:16:48.375 ]' 00:16:48.375 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.375 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:48.375 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.375 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:48.375 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.375 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.375 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.375 18:24:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.633 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:16:48.633 18:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:16:49.566 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.566 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:49.566 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.566 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.566 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.566 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.566 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:49.566 18:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:49.825 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:49.825 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.825 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:49.825 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:49.825 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:49.825 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.825 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.825 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.825 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:49.825 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.825 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.825 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.825 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.390 00:16:50.390 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.390 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.390 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.653 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.653 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.653 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.653 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.653 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.653 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.653 { 00:16:50.653 "cntlid": 107, 00:16:50.653 "qid": 0, 00:16:50.653 "state": "enabled", 00:16:50.653 "thread": "nvmf_tgt_poll_group_000", 00:16:50.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:50.653 "listen_address": { 00:16:50.653 "trtype": "TCP", 00:16:50.653 "adrfam": "IPv4", 00:16:50.653 "traddr": "10.0.0.2", 00:16:50.653 "trsvcid": "4420" 00:16:50.653 }, 00:16:50.653 "peer_address": { 00:16:50.653 "trtype": "TCP", 00:16:50.653 "adrfam": "IPv4", 00:16:50.653 "traddr": "10.0.0.1", 00:16:50.653 "trsvcid": "52792" 00:16:50.653 }, 00:16:50.653 "auth": { 00:16:50.653 "state": "completed", 00:16:50.653 "digest": "sha512", 00:16:50.653 "dhgroup": "ffdhe2048" 00:16:50.653 } 00:16:50.653 } 00:16:50.653 ]' 00:16:50.653 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.653 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:50.653 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.653 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:50.653 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:16:50.653 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.653 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.653 18:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.963 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:16:50.963 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:16:51.921 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.921 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:51.921 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.921 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.921 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.921 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.921 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:51.921 18:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:52.179 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:52.179 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.179 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:52.179 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:52.179 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:52.179 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.179 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 
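Editor's note: on the host side, each pass then attaches a bdev controller with the same key pair and verifies, from the target's view, that the qpair completed DH-HMAC-CHAP with the expected digest and DH group. A condensed sketch of that check, using only the RPCs and jq filters visible in the trace (addresses, names, and key names copied from the log):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }

# Attach a bdev controller over TCP, authenticating with key2/ckey2.
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

# The controller must appear under the requested name.
[[ "$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]

# Ask the target for the subsystem's qpairs and check the negotiated auth fields.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha512 ]]
[[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == ffdhe2048 ]]
[[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]

# Detach again before the next digest/dhgroup/key combination.
hostrpc bdev_nvme_detach_controller nvme0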
00:16:52.179 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.179 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.179 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.179 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.179 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.179 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.436 00:16:52.436 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.436 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.436 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.694 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.694 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.694 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.694 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.694 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.694 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.694 { 00:16:52.694 "cntlid": 109, 00:16:52.694 "qid": 0, 00:16:52.694 "state": "enabled", 00:16:52.694 "thread": "nvmf_tgt_poll_group_000", 00:16:52.694 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:52.694 "listen_address": { 00:16:52.694 "trtype": "TCP", 00:16:52.694 "adrfam": "IPv4", 00:16:52.694 "traddr": "10.0.0.2", 00:16:52.694 "trsvcid": "4420" 00:16:52.694 }, 00:16:52.694 "peer_address": { 00:16:52.694 "trtype": "TCP", 00:16:52.694 "adrfam": "IPv4", 00:16:52.694 "traddr": "10.0.0.1", 00:16:52.694 "trsvcid": "52814" 00:16:52.694 }, 00:16:52.694 "auth": { 00:16:52.694 "state": "completed", 00:16:52.694 "digest": "sha512", 00:16:52.694 "dhgroup": "ffdhe2048" 00:16:52.694 } 00:16:52.694 } 00:16:52.694 ]' 00:16:52.694 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.694 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:52.694 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.952 18:24:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:52.952 18:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.952 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.952 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.952 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.210 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:16:53.210 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:16:54.142 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.142 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:54.142 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.142 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.142 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.142 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.142 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:54.142 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:54.400 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:54.400 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.400 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:54.400 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:54.400 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:54.400 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.400 18:24:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:54.400 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.400 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.400 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.400 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:54.400 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.400 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.658 00:16:54.658 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.658 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.658 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.916 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.916 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.916 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.916 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.916 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.916 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.916 { 00:16:54.916 "cntlid": 111, 00:16:54.916 "qid": 0, 00:16:54.916 "state": "enabled", 00:16:54.916 "thread": "nvmf_tgt_poll_group_000", 00:16:54.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:54.916 "listen_address": { 00:16:54.916 "trtype": "TCP", 00:16:54.916 "adrfam": "IPv4", 00:16:54.916 "traddr": "10.0.0.2", 00:16:54.916 "trsvcid": "4420" 00:16:54.916 }, 00:16:54.916 "peer_address": { 00:16:54.916 "trtype": "TCP", 00:16:54.916 "adrfam": "IPv4", 00:16:54.916 "traddr": "10.0.0.1", 00:16:54.916 "trsvcid": "34742" 00:16:54.916 }, 00:16:54.916 "auth": { 00:16:54.916 "state": "completed", 00:16:54.916 "digest": "sha512", 00:16:54.916 "dhgroup": "ffdhe2048" 00:16:54.916 } 00:16:54.916 } 00:16:54.916 ]' 00:16:54.916 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.916 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:54.916 
18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.916 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:54.916 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.174 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.174 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.174 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.432 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:16:55.432 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:16:56.364 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.364 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:56.364 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.364 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.364 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.364 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.364 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.364 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:56.364 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:56.364 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:56.364 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.364 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:56.622 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:56.622 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:56.622 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.622 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.622 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.622 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.622 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.622 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.622 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.622 18:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.879 00:16:56.879 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.879 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.879 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.137 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.137 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.137 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.137 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.137 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.137 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.137 { 00:16:57.137 "cntlid": 113, 00:16:57.137 "qid": 0, 00:16:57.137 "state": "enabled", 00:16:57.137 "thread": "nvmf_tgt_poll_group_000", 00:16:57.137 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:57.137 "listen_address": { 00:16:57.137 "trtype": "TCP", 00:16:57.137 "adrfam": "IPv4", 00:16:57.137 "traddr": "10.0.0.2", 00:16:57.137 "trsvcid": "4420" 00:16:57.137 }, 00:16:57.137 "peer_address": { 00:16:57.137 "trtype": "TCP", 00:16:57.137 "adrfam": "IPv4", 00:16:57.137 "traddr": "10.0.0.1", 00:16:57.137 "trsvcid": "34762" 00:16:57.137 }, 00:16:57.137 "auth": { 00:16:57.137 "state": "completed", 00:16:57.137 "digest": "sha512", 00:16:57.137 "dhgroup": "ffdhe3072" 00:16:57.137 } 00:16:57.137 } 00:16:57.137 ]' 00:16:57.137 18:24:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.137 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:57.137 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.137 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:57.137 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.396 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.396 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.396 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.653 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:16:57.653 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:16:58.586 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.586 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:58.586 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.586 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.586 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.586 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.586 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:58.586 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:58.845 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:58.845 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.845 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:16:58.845 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:58.845 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:58.845 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.845 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.845 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.845 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.845 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.845 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.845 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.845 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.103 00:16:59.103 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.103 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.103 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.363 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.363 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.363 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.363 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.363 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.363 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.363 { 00:16:59.363 "cntlid": 115, 00:16:59.363 "qid": 0, 00:16:59.363 "state": "enabled", 00:16:59.363 "thread": "nvmf_tgt_poll_group_000", 00:16:59.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:59.363 "listen_address": { 00:16:59.363 "trtype": "TCP", 00:16:59.363 "adrfam": "IPv4", 00:16:59.363 "traddr": "10.0.0.2", 00:16:59.363 "trsvcid": "4420" 00:16:59.363 }, 00:16:59.363 "peer_address": { 00:16:59.363 "trtype": "TCP", 00:16:59.363 "adrfam": "IPv4", 
00:16:59.363 "traddr": "10.0.0.1", 00:16:59.363 "trsvcid": "34788" 00:16:59.363 }, 00:16:59.363 "auth": { 00:16:59.363 "state": "completed", 00:16:59.363 "digest": "sha512", 00:16:59.363 "dhgroup": "ffdhe3072" 00:16:59.363 } 00:16:59.363 } 00:16:59.363 ]' 00:16:59.363 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.363 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:59.363 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.621 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:59.621 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.621 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.621 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.621 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.884 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:16:59.884 18:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:17:00.829 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.829 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:00.829 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.829 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.829 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.829 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.829 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:00.829 18:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:01.087 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:17:01.087 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.087 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:01.087 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:01.087 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:01.087 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.087 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.087 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.087 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.087 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.087 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.087 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.087 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.346 00:17:01.346 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.346 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.346 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.604 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.604 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.604 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.604 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.604 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.604 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.604 { 00:17:01.604 "cntlid": 117, 00:17:01.604 "qid": 0, 00:17:01.604 "state": "enabled", 00:17:01.604 "thread": "nvmf_tgt_poll_group_000", 00:17:01.604 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:01.604 "listen_address": { 00:17:01.604 "trtype": "TCP", 
00:17:01.604 "adrfam": "IPv4", 00:17:01.604 "traddr": "10.0.0.2", 00:17:01.604 "trsvcid": "4420" 00:17:01.604 }, 00:17:01.604 "peer_address": { 00:17:01.604 "trtype": "TCP", 00:17:01.604 "adrfam": "IPv4", 00:17:01.604 "traddr": "10.0.0.1", 00:17:01.604 "trsvcid": "34826" 00:17:01.604 }, 00:17:01.604 "auth": { 00:17:01.604 "state": "completed", 00:17:01.604 "digest": "sha512", 00:17:01.604 "dhgroup": "ffdhe3072" 00:17:01.604 } 00:17:01.604 } 00:17:01.604 ]' 00:17:01.604 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.604 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:01.604 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.862 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:01.862 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.862 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.862 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.862 18:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.119 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:17:02.119 18:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:17:03.053 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.053 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:03.053 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.053 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.053 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.053 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.053 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:03.053 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:03.311 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:03.311 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.311 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:03.311 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:03.311 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:03.311 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.311 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:03.311 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.311 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.311 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.311 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:03.311 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:03.311 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:03.570 00:17:03.570 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.570 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.570 18:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.829 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.829 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.829 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.829 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.829 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.829 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.829 { 00:17:03.829 "cntlid": 119, 00:17:03.829 "qid": 0, 00:17:03.829 "state": "enabled", 00:17:03.829 "thread": "nvmf_tgt_poll_group_000", 00:17:03.829 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:03.829 "listen_address": { 00:17:03.829 "trtype": "TCP", 00:17:03.829 "adrfam": "IPv4", 00:17:03.829 "traddr": "10.0.0.2", 00:17:03.829 "trsvcid": "4420" 00:17:03.829 }, 00:17:03.829 "peer_address": { 00:17:03.829 "trtype": "TCP", 00:17:03.829 "adrfam": "IPv4", 00:17:03.829 "traddr": "10.0.0.1", 00:17:03.829 "trsvcid": "34850" 00:17:03.829 }, 00:17:03.829 "auth": { 00:17:03.829 "state": "completed", 00:17:03.829 "digest": "sha512", 00:17:03.829 "dhgroup": "ffdhe3072" 00:17:03.829 } 00:17:03.829 } 00:17:03.829 ]' 00:17:03.829 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.087 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.087 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.087 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:04.087 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.087 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.088 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.088 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.346 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:17:04.346 18:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:17:05.279 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.279 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:05.279 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.279 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.279 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.279 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:05.279 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.279 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:05.279 18:24:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:05.537 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:05.537 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.537 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:05.537 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:05.537 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:05.537 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.537 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.537 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.537 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.537 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.537 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.537 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.537 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.795 00:17:06.053 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.053 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.053 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.311 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.311 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.311 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.312 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.312 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.312 18:24:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.312 { 00:17:06.312 "cntlid": 121, 00:17:06.312 "qid": 0, 00:17:06.312 "state": "enabled", 00:17:06.312 "thread": "nvmf_tgt_poll_group_000", 00:17:06.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:06.312 "listen_address": { 00:17:06.312 "trtype": "TCP", 00:17:06.312 "adrfam": "IPv4", 00:17:06.312 "traddr": "10.0.0.2", 00:17:06.312 "trsvcid": "4420" 00:17:06.312 }, 00:17:06.312 "peer_address": { 00:17:06.312 "trtype": "TCP", 00:17:06.312 "adrfam": "IPv4", 00:17:06.312 "traddr": "10.0.0.1", 00:17:06.312 "trsvcid": "37948" 00:17:06.312 }, 00:17:06.312 "auth": { 00:17:06.312 "state": "completed", 00:17:06.312 "digest": "sha512", 00:17:06.312 "dhgroup": "ffdhe4096" 00:17:06.312 } 00:17:06.312 } 00:17:06.312 ]' 00:17:06.312 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.312 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:06.312 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.312 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:06.312 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.312 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.312 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.312 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.569 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:17:06.569 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:17:07.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:07.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
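[Annotation] Each block above and below repeats the same per-key DH-CHAP round from target/auth.sh: restrict the host app to one digest/dhgroup, register the host on the subsystem with a key pair, attach a controller through the host-side RPC socket, check the negotiated auth state on the target, then detach, re-authenticate once with nvme-cli using the literal DHHC-1 secrets, and remove the host again. Below is a minimal standalone sketch of that round, assuming the host RPC socket /var/tmp/host.sock and the 10.0.0.2:4420 listener captured in this log; the target-side RPC socket is not shown in this excerpt, so the default socket used by rpc.py is an assumption, and the key names and placeholder secrets are illustrative only.

# Sketch of one DH-CHAP round (sha512 + ffdhe4096, key0/ckey0), mirroring the commands logged above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTSOCK=/var/tmp/host.sock                                            # host-side SPDK app socket, as in the log
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

# 1. Limit the host application to a single digest/dhgroup combination.
$RPC -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# 2. Allow the host on the subsystem with matching key and controller key
#    (target-side RPC; default rpc.py socket assumed here, not shown in this excerpt).
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3. Attach a controller from the host application using the same keys.
$RPC -s $HOSTSOCK bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
$RPC -s $HOSTSOCK bdev_nvme_get_controllers | jq -r '.[].name'          # expect "nvme0"

# 4. Verify digest, dhgroup, and state of the authenticated qpair on the target.
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth'

# 5. Detach, authenticate once more with nvme-cli using the literal DHHC-1 secrets
#    shown in the log (elided here), then drop the host from the subsystem.
$RPC -s $HOSTSOCK bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN \
    --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 \
    --dhchap-secret "DHHC-1:00:<key0>" --dhchap-ctrl-secret "DHHC-1:03:<ckey0>"
nvme disconnect -n $SUBNQN
$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN

The log entries that follow are the same round driven by the test's loop over the remaining key ids and the larger ffdhe groups (6144, 8192), so the command shapes repeat with only the key index and --dhchap-dhgroups value changing.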
00:17:07.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:07.503 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:07.761 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:07.761 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.761 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:07.761 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:07.761 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:07.761 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.761 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.761 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.761 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.761 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.761 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.761 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.761 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.325 00:17:08.325 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.325 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.325 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.583 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.583 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.583 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.583 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.583 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.583 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.583 { 00:17:08.583 "cntlid": 123, 00:17:08.583 "qid": 0, 00:17:08.583 "state": "enabled", 00:17:08.583 "thread": "nvmf_tgt_poll_group_000", 00:17:08.583 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:08.583 "listen_address": { 00:17:08.583 "trtype": "TCP", 00:17:08.583 "adrfam": "IPv4", 00:17:08.583 "traddr": "10.0.0.2", 00:17:08.583 "trsvcid": "4420" 00:17:08.583 }, 00:17:08.583 "peer_address": { 00:17:08.583 "trtype": "TCP", 00:17:08.583 "adrfam": "IPv4", 00:17:08.583 "traddr": "10.0.0.1", 00:17:08.583 "trsvcid": "37996" 00:17:08.583 }, 00:17:08.583 "auth": { 00:17:08.583 "state": "completed", 00:17:08.583 "digest": "sha512", 00:17:08.583 "dhgroup": "ffdhe4096" 00:17:08.583 } 00:17:08.583 } 00:17:08.583 ]' 00:17:08.583 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.583 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:08.583 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.583 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:08.583 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.583 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.583 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.583 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.841 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:17:08.841 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:17:09.773 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.773 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:09.773 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.773 18:24:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.773 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.773 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.773 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:09.773 18:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:10.031 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:10.031 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.031 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:10.031 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:10.031 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:10.031 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.031 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.031 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.031 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.031 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.031 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.031 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.031 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.594 00:17:10.594 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.594 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.594 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.852 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.852 18:24:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.852 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.852 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.852 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.852 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.852 { 00:17:10.852 "cntlid": 125, 00:17:10.852 "qid": 0, 00:17:10.852 "state": "enabled", 00:17:10.852 "thread": "nvmf_tgt_poll_group_000", 00:17:10.852 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:10.852 "listen_address": { 00:17:10.852 "trtype": "TCP", 00:17:10.852 "adrfam": "IPv4", 00:17:10.852 "traddr": "10.0.0.2", 00:17:10.852 "trsvcid": "4420" 00:17:10.852 }, 00:17:10.852 "peer_address": { 00:17:10.852 "trtype": "TCP", 00:17:10.852 "adrfam": "IPv4", 00:17:10.852 "traddr": "10.0.0.1", 00:17:10.852 "trsvcid": "38032" 00:17:10.852 }, 00:17:10.852 "auth": { 00:17:10.852 "state": "completed", 00:17:10.852 "digest": "sha512", 00:17:10.852 "dhgroup": "ffdhe4096" 00:17:10.852 } 00:17:10.852 } 00:17:10.852 ]' 00:17:10.852 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.852 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.852 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.852 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:10.852 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.852 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.852 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.852 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.110 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:17:11.110 18:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:17:12.044 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.044 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:12.044 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.044 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.044 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.044 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.044 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:12.044 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:12.302 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:12.302 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.302 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:12.302 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:12.302 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:12.302 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.302 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:12.302 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.302 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.303 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.303 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:12.303 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:12.303 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:12.867 00:17:12.867 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.867 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.867 18:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.126 18:24:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.126 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.126 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.126 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.126 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.126 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.126 { 00:17:13.126 "cntlid": 127, 00:17:13.126 "qid": 0, 00:17:13.126 "state": "enabled", 00:17:13.126 "thread": "nvmf_tgt_poll_group_000", 00:17:13.126 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:13.126 "listen_address": { 00:17:13.126 "trtype": "TCP", 00:17:13.126 "adrfam": "IPv4", 00:17:13.126 "traddr": "10.0.0.2", 00:17:13.126 "trsvcid": "4420" 00:17:13.126 }, 00:17:13.126 "peer_address": { 00:17:13.126 "trtype": "TCP", 00:17:13.126 "adrfam": "IPv4", 00:17:13.126 "traddr": "10.0.0.1", 00:17:13.126 "trsvcid": "38068" 00:17:13.126 }, 00:17:13.126 "auth": { 00:17:13.126 "state": "completed", 00:17:13.126 "digest": "sha512", 00:17:13.126 "dhgroup": "ffdhe4096" 00:17:13.126 } 00:17:13.126 } 00:17:13.126 ]' 00:17:13.126 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.126 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.126 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.126 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:13.126 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.126 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.126 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.126 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.384 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:17:13.384 18:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:17:14.318 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.318 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:14.318 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.318 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.318 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.318 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:14.318 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.318 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:14.318 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:14.576 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:14.576 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.576 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:14.576 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:14.576 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:14.576 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.576 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.576 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.576 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.576 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.576 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.576 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.576 18:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.141 00:17:15.141 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.141 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.141 
18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.399 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.399 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.399 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.399 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.399 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.399 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.399 { 00:17:15.399 "cntlid": 129, 00:17:15.399 "qid": 0, 00:17:15.399 "state": "enabled", 00:17:15.399 "thread": "nvmf_tgt_poll_group_000", 00:17:15.399 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:15.399 "listen_address": { 00:17:15.399 "trtype": "TCP", 00:17:15.399 "adrfam": "IPv4", 00:17:15.399 "traddr": "10.0.0.2", 00:17:15.399 "trsvcid": "4420" 00:17:15.399 }, 00:17:15.399 "peer_address": { 00:17:15.399 "trtype": "TCP", 00:17:15.399 "adrfam": "IPv4", 00:17:15.399 "traddr": "10.0.0.1", 00:17:15.399 "trsvcid": "44112" 00:17:15.399 }, 00:17:15.399 "auth": { 00:17:15.399 "state": "completed", 00:17:15.399 "digest": "sha512", 00:17:15.399 "dhgroup": "ffdhe6144" 00:17:15.399 } 00:17:15.399 } 00:17:15.399 ]' 00:17:15.399 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.399 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:15.399 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.657 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:15.657 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.657 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.657 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.657 18:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.914 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:17:15.914 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret 
DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:17:16.883 18:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.883 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:16.883 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.883 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.883 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.883 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.883 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:16.883 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:17.141 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:17.141 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.141 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:17.141 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:17.141 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:17.141 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.141 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.141 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.141 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.141 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.141 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.141 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.141 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.707 00:17:17.707 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.707 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.707 18:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.964 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.964 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.964 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.964 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.964 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.964 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.964 { 00:17:17.964 "cntlid": 131, 00:17:17.964 "qid": 0, 00:17:17.965 "state": "enabled", 00:17:17.965 "thread": "nvmf_tgt_poll_group_000", 00:17:17.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:17.965 "listen_address": { 00:17:17.965 "trtype": "TCP", 00:17:17.965 "adrfam": "IPv4", 00:17:17.965 "traddr": "10.0.0.2", 00:17:17.965 "trsvcid": "4420" 00:17:17.965 }, 00:17:17.965 "peer_address": { 00:17:17.965 "trtype": "TCP", 00:17:17.965 "adrfam": "IPv4", 00:17:17.965 "traddr": "10.0.0.1", 00:17:17.965 "trsvcid": "44150" 00:17:17.965 }, 00:17:17.965 "auth": { 00:17:17.965 "state": "completed", 00:17:17.965 "digest": "sha512", 00:17:17.965 "dhgroup": "ffdhe6144" 00:17:17.965 } 00:17:17.965 } 00:17:17.965 ]' 00:17:17.965 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.965 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:17.965 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.222 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:18.222 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.223 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.223 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.223 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.480 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:17:18.481 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:17:19.415 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.415 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:19.415 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.415 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.415 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.415 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.415 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:19.415 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:19.672 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:19.672 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.672 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:19.672 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:19.672 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:19.672 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.672 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.672 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.672 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.672 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.672 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.672 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.672 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.247 00:17:20.247 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.247 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.247 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.525 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.525 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.525 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.525 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.525 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.525 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.525 { 00:17:20.525 "cntlid": 133, 00:17:20.525 "qid": 0, 00:17:20.525 "state": "enabled", 00:17:20.525 "thread": "nvmf_tgt_poll_group_000", 00:17:20.525 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:20.525 "listen_address": { 00:17:20.525 "trtype": "TCP", 00:17:20.525 "adrfam": "IPv4", 00:17:20.525 "traddr": "10.0.0.2", 00:17:20.525 "trsvcid": "4420" 00:17:20.525 }, 00:17:20.525 "peer_address": { 00:17:20.525 "trtype": "TCP", 00:17:20.525 "adrfam": "IPv4", 00:17:20.525 "traddr": "10.0.0.1", 00:17:20.525 "trsvcid": "44170" 00:17:20.525 }, 00:17:20.525 "auth": { 00:17:20.525 "state": "completed", 00:17:20.525 "digest": "sha512", 00:17:20.525 "dhgroup": "ffdhe6144" 00:17:20.525 } 00:17:20.525 } 00:17:20.525 ]' 00:17:20.525 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.525 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:20.525 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.525 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:20.525 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.525 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.525 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.525 18:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.783 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret 
DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:17:20.783 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:17:21.716 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.716 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:21.716 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.716 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.716 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.716 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.716 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:21.716 18:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:22.282 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:22.282 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.282 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:22.282 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:22.282 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:22.282 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.282 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:22.282 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.282 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.282 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.282 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:22.282 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:17:22.282 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:22.539 00:17:22.539 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.539 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.539 18:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.106 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.106 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.106 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.106 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.106 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.106 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.106 { 00:17:23.106 "cntlid": 135, 00:17:23.106 "qid": 0, 00:17:23.106 "state": "enabled", 00:17:23.106 "thread": "nvmf_tgt_poll_group_000", 00:17:23.106 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:23.106 "listen_address": { 00:17:23.106 "trtype": "TCP", 00:17:23.106 "adrfam": "IPv4", 00:17:23.106 "traddr": "10.0.0.2", 00:17:23.106 "trsvcid": "4420" 00:17:23.106 }, 00:17:23.106 "peer_address": { 00:17:23.106 "trtype": "TCP", 00:17:23.106 "adrfam": "IPv4", 00:17:23.106 "traddr": "10.0.0.1", 00:17:23.106 "trsvcid": "44198" 00:17:23.106 }, 00:17:23.106 "auth": { 00:17:23.106 "state": "completed", 00:17:23.106 "digest": "sha512", 00:17:23.106 "dhgroup": "ffdhe6144" 00:17:23.106 } 00:17:23.106 } 00:17:23.106 ]' 00:17:23.106 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.106 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:23.106 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.106 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:23.106 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.106 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.106 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.106 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.364 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:17:23.364 18:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:17:24.297 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.297 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:24.297 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.297 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.297 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.297 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.297 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.297 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:24.297 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:24.555 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:24.555 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.555 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:24.555 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:24.555 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:24.555 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.555 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.555 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.555 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.555 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.555 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.555 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.555 18:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.488 00:17:25.488 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.488 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.488 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.488 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.488 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.488 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.488 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.488 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.488 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.488 { 00:17:25.488 "cntlid": 137, 00:17:25.488 "qid": 0, 00:17:25.488 "state": "enabled", 00:17:25.488 "thread": "nvmf_tgt_poll_group_000", 00:17:25.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:25.488 "listen_address": { 00:17:25.488 "trtype": "TCP", 00:17:25.488 "adrfam": "IPv4", 00:17:25.488 "traddr": "10.0.0.2", 00:17:25.488 "trsvcid": "4420" 00:17:25.488 }, 00:17:25.488 "peer_address": { 00:17:25.488 "trtype": "TCP", 00:17:25.488 "adrfam": "IPv4", 00:17:25.488 "traddr": "10.0.0.1", 00:17:25.488 "trsvcid": "33336" 00:17:25.488 }, 00:17:25.488 "auth": { 00:17:25.488 "state": "completed", 00:17:25.488 "digest": "sha512", 00:17:25.488 "dhgroup": "ffdhe8192" 00:17:25.488 } 00:17:25.488 } 00:17:25.488 ]' 00:17:25.488 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.746 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.746 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.746 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:25.746 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.746 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.746 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.746 18:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.004 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:17:26.004 18:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:17:26.936 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.936 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:26.936 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.936 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.936 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.936 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.936 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:26.936 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:27.194 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:27.194 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.194 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:27.194 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:27.194 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:27.194 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.195 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.195 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.195 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.195 18:24:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.195 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.195 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.195 18:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.129 00:17:28.129 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.129 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.129 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.387 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.387 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.387 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.387 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.387 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.387 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.387 { 00:17:28.387 "cntlid": 139, 00:17:28.387 "qid": 0, 00:17:28.387 "state": "enabled", 00:17:28.387 "thread": "nvmf_tgt_poll_group_000", 00:17:28.387 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:28.387 "listen_address": { 00:17:28.387 "trtype": "TCP", 00:17:28.387 "adrfam": "IPv4", 00:17:28.387 "traddr": "10.0.0.2", 00:17:28.387 "trsvcid": "4420" 00:17:28.387 }, 00:17:28.387 "peer_address": { 00:17:28.387 "trtype": "TCP", 00:17:28.387 "adrfam": "IPv4", 00:17:28.387 "traddr": "10.0.0.1", 00:17:28.387 "trsvcid": "33368" 00:17:28.387 }, 00:17:28.387 "auth": { 00:17:28.387 "state": "completed", 00:17:28.387 "digest": "sha512", 00:17:28.387 "dhgroup": "ffdhe8192" 00:17:28.387 } 00:17:28.387 } 00:17:28.387 ]' 00:17:28.387 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.387 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.387 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.387 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:28.387 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.387 18:24:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.387 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.387 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.645 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:17:28.645 18:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: --dhchap-ctrl-secret DHHC-1:02:NGQ4MWVmYTk3MzEyZWNkZWY0YTQzMmU1OTkyODM1ODc0YWJkOTg2ZDY1M2ZiMjdkGlPbpQ==: 00:17:29.579 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.579 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:29.579 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.579 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.579 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.579 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.579 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:29.579 18:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:29.837 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:29.837 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.837 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:29.837 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:29.837 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:29.837 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.837 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.837 18:24:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.837 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.837 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.837 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.837 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.837 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.769 00:17:30.769 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.769 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.769 18:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.027 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.027 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.027 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.027 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.027 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.027 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.027 { 00:17:31.027 "cntlid": 141, 00:17:31.027 "qid": 0, 00:17:31.027 "state": "enabled", 00:17:31.027 "thread": "nvmf_tgt_poll_group_000", 00:17:31.027 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:31.027 "listen_address": { 00:17:31.027 "trtype": "TCP", 00:17:31.027 "adrfam": "IPv4", 00:17:31.027 "traddr": "10.0.0.2", 00:17:31.027 "trsvcid": "4420" 00:17:31.027 }, 00:17:31.027 "peer_address": { 00:17:31.027 "trtype": "TCP", 00:17:31.027 "adrfam": "IPv4", 00:17:31.027 "traddr": "10.0.0.1", 00:17:31.027 "trsvcid": "33394" 00:17:31.027 }, 00:17:31.027 "auth": { 00:17:31.027 "state": "completed", 00:17:31.027 "digest": "sha512", 00:17:31.027 "dhgroup": "ffdhe8192" 00:17:31.027 } 00:17:31.027 } 00:17:31.027 ]' 00:17:31.027 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.027 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:31.027 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.027 18:24:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:31.027 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.027 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.027 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.027 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.285 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:17:31.285 18:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:01:ZGJiMmNhYjM1NzRiOGJjZmYzZjI2ZDM0OWFhNDFmMmQrhnME: 00:17:32.217 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.217 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:32.217 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.217 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.217 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.217 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.217 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:32.217 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:32.476 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:32.476 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.476 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:32.476 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:32.476 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:32.476 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.476 18:24:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:32.476 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.476 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.476 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.476 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:32.476 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:32.476 18:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:33.409 00:17:33.409 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.409 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.409 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.667 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.667 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.667 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.667 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.667 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.667 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.667 { 00:17:33.667 "cntlid": 143, 00:17:33.667 "qid": 0, 00:17:33.667 "state": "enabled", 00:17:33.667 "thread": "nvmf_tgt_poll_group_000", 00:17:33.667 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:33.667 "listen_address": { 00:17:33.667 "trtype": "TCP", 00:17:33.667 "adrfam": "IPv4", 00:17:33.667 "traddr": "10.0.0.2", 00:17:33.667 "trsvcid": "4420" 00:17:33.667 }, 00:17:33.667 "peer_address": { 00:17:33.667 "trtype": "TCP", 00:17:33.667 "adrfam": "IPv4", 00:17:33.667 "traddr": "10.0.0.1", 00:17:33.667 "trsvcid": "33406" 00:17:33.667 }, 00:17:33.667 "auth": { 00:17:33.667 "state": "completed", 00:17:33.667 "digest": "sha512", 00:17:33.667 "dhgroup": "ffdhe8192" 00:17:33.667 } 00:17:33.667 } 00:17:33.667 ]' 00:17:33.667 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.667 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:33.667 
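The verification that the trace keeps repeating after each attach is small enough to restate on its own. A minimal standalone sketch of that check, reusing the subsystem NQN and jq filters from this run and assuming the target app answers on its default RPC socket (/var/tmp/spdk.sock):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Ask the target which qpairs the host established and inspect the negotiated auth fields.
  qpairs=$("$SPDK/scripts/rpc.py" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]

A non-zero exit from any of the three tests is what makes the loop above fail when a connection authenticated with an unexpected digest or DH group.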
18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.667 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:33.667 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.667 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.667 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.667 18:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.232 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:17:34.232 18:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:17:34.796 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.796 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:34.796 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.796 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.054 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.054 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:35.054 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:35.054 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:35.054 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:35.054 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:35.054 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:35.312 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:35.312 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.312 18:24:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:35.312 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:35.312 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:35.312 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.312 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.312 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.312 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.312 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.312 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.312 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.312 18:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.246 00:17:36.246 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.246 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.246 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.246 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.246 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.246 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.246 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.246 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.246 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.246 { 00:17:36.246 "cntlid": 145, 00:17:36.246 "qid": 0, 00:17:36.246 "state": "enabled", 00:17:36.246 "thread": "nvmf_tgt_poll_group_000", 00:17:36.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:36.246 "listen_address": { 00:17:36.246 "trtype": "TCP", 00:17:36.246 "adrfam": "IPv4", 00:17:36.246 "traddr": "10.0.0.2", 00:17:36.246 "trsvcid": "4420" 00:17:36.246 }, 00:17:36.246 "peer_address": { 00:17:36.246 
"trtype": "TCP", 00:17:36.246 "adrfam": "IPv4", 00:17:36.246 "traddr": "10.0.0.1", 00:17:36.246 "trsvcid": "39796" 00:17:36.246 }, 00:17:36.246 "auth": { 00:17:36.246 "state": "completed", 00:17:36.246 "digest": "sha512", 00:17:36.246 "dhgroup": "ffdhe8192" 00:17:36.246 } 00:17:36.246 } 00:17:36.246 ]' 00:17:36.246 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.505 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:36.505 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.505 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:36.505 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.505 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.505 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.505 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.763 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:17:36.763 18:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZmEzODIwZmRmOTlmMWU3MTJhYzEwYWUzNTUyNTdkYWY4MjZlM2ZiYTc5MDgxZDhkL80ZlA==: --dhchap-ctrl-secret DHHC-1:03:ZTc5MzgzYWVjNDdmMzkwZjU2MjZkNGU4YWU2NjE3YmI3ODkxZTU1NjNiYjE5ZWQzNWZlYWRkOGVhOGJiOTQ3NgT6L6k=: 00:17:37.696 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.696 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:37.696 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.696 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.696 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.696 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:17:37.696 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.696 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.696 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.696 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:37.696 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:37.696 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:37.696 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:37.696 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:37.696 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:37.696 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:37.696 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:37.696 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:37.696 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:38.631 request: 00:17:38.631 { 00:17:38.631 "name": "nvme0", 00:17:38.631 "trtype": "tcp", 00:17:38.631 "traddr": "10.0.0.2", 00:17:38.631 "adrfam": "ipv4", 00:17:38.631 "trsvcid": "4420", 00:17:38.631 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:38.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:38.631 "prchk_reftag": false, 00:17:38.631 "prchk_guard": false, 00:17:38.631 "hdgst": false, 00:17:38.631 "ddgst": false, 00:17:38.631 "dhchap_key": "key2", 00:17:38.631 "allow_unrecognized_csi": false, 00:17:38.631 "method": "bdev_nvme_attach_controller", 00:17:38.631 "req_id": 1 00:17:38.631 } 00:17:38.631 Got JSON-RPC error response 00:17:38.631 response: 00:17:38.631 { 00:17:38.631 "code": -5, 00:17:38.631 "message": "Input/output error" 00:17:38.631 } 00:17:38.631 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:38.631 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:38.631 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:38.631 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:38.631 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:38.631 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.631 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.631 18:24:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.631 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.631 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.631 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.631 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.631 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:38.631 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:38.631 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:38.631 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:38.631 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:38.631 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:38.631 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:38.631 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:38.631 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:38.631 18:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:39.196 request: 00:17:39.196 { 00:17:39.196 "name": "nvme0", 00:17:39.196 "trtype": "tcp", 00:17:39.197 "traddr": "10.0.0.2", 00:17:39.197 "adrfam": "ipv4", 00:17:39.197 "trsvcid": "4420", 00:17:39.197 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:39.197 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:39.197 "prchk_reftag": false, 00:17:39.197 "prchk_guard": false, 00:17:39.197 "hdgst": false, 00:17:39.197 "ddgst": false, 00:17:39.197 "dhchap_key": "key1", 00:17:39.197 "dhchap_ctrlr_key": "ckey2", 00:17:39.197 "allow_unrecognized_csi": false, 00:17:39.197 "method": "bdev_nvme_attach_controller", 00:17:39.197 "req_id": 1 00:17:39.197 } 00:17:39.197 Got JSON-RPC error response 00:17:39.197 response: 00:17:39.197 { 00:17:39.197 "code": -5, 00:17:39.197 "message": "Input/output error" 00:17:39.197 } 00:17:39.197 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:39.197 18:24:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:39.197 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:39.197 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:39.197 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:39.197 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.197 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.197 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.197 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:17:39.197 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.197 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.197 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.197 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.197 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:39.197 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.197 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:39.197 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.197 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:39.197 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.197 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.197 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.197 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.129 request: 00:17:40.129 { 00:17:40.129 "name": "nvme0", 00:17:40.129 "trtype": "tcp", 00:17:40.129 "traddr": "10.0.0.2", 00:17:40.129 "adrfam": "ipv4", 00:17:40.129 "trsvcid": "4420", 00:17:40.129 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:40.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:40.129 "prchk_reftag": false, 00:17:40.129 "prchk_guard": false, 00:17:40.129 "hdgst": false, 00:17:40.129 "ddgst": false, 00:17:40.129 "dhchap_key": "key1", 00:17:40.129 "dhchap_ctrlr_key": "ckey1", 00:17:40.129 "allow_unrecognized_csi": false, 00:17:40.129 "method": "bdev_nvme_attach_controller", 00:17:40.129 "req_id": 1 00:17:40.129 } 00:17:40.129 Got JSON-RPC error response 00:17:40.129 response: 00:17:40.129 { 00:17:40.129 "code": -5, 00:17:40.129 "message": "Input/output error" 00:17:40.129 } 00:17:40.129 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:40.129 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:40.129 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:40.129 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:40.129 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:40.129 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.129 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.129 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.129 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1236221 00:17:40.129 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1236221 ']' 00:17:40.129 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1236221 00:17:40.129 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:17:40.129 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:40.129 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1236221 00:17:40.129 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:40.129 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:40.129 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1236221' 00:17:40.129 killing process with pid 1236221 00:17:40.129 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1236221 00:17:40.129 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1236221 00:17:40.387 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:40.387 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:40.387 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:40.387 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:40.387 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=1258902 00:17:40.387 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:40.387 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 1258902 00:17:40.387 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1258902 ']' 00:17:40.387 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.387 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:40.387 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.387 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:40.387 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.645 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:40.645 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:40.645 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:40.645 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:40.645 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.645 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.645 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:40.645 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1258902 00:17:40.645 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1258902 ']' 00:17:40.645 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.645 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:40.645 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
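For reference, the nvmfappstart/waitforlisten step captured above reduces to roughly the following. The netns name, binary path and flags are copied from this log; polling rpc_get_methods and finishing start-up with framework_start_init is an assumption about how a --wait-for-rpc target is normally brought up, not something shown in this trace:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Start the target inside the test netns with the nvmf_auth debug log component enabled
  # and initialization gated on RPC.
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!
  # Wait until the RPC socket answers, then let the framework finish initializing.
  until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  "$SPDK/scripts/rpc.py" framework_start_init

The -L nvmf_auth flag only enables the named debug log component on the target; it does not change the authentication behaviour being tested.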
00:17:40.645 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:40.645 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.903 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:40.903 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:40.903 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:40.903 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.903 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.161 null0 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.pW0 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.b3c ]] 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.b3c 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.4Kf 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Qq7 ]] 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Qq7 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:41.161 18:24:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.tST 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.g6G ]] 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.g6G 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.47e 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:41.161 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.162 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.162 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.162 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:41.162 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
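Condensed into one place, the key3 path being exercised here (target authenticates the host, no controller key) looks roughly like the sketch below. The key file names, NQNs and the /var/tmp/host.sock socket are the ones from this run; registering key3 on the host-side app is assumed to have happened earlier in the test and is repeated here only so the snippet is self-contained:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

  # Target side: load the DHHC-1 secret into the keyring and allow the host to use it.
  "$SPDK/scripts/rpc.py" keyring_file_add_key key3 /tmp/spdk.key-sha512.47e
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key3

  # Host side: the initiator app needs the same key in its own keyring, then attaches with it.
  "$SPDK/scripts/rpc.py" -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.47e
  "$SPDK/scripts/rpc.py" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
  "$SPDK/scripts/rpc.py" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'

Adding --dhchap-ctrlr-key on both the add_host and attach_controller calls is what turns this into the bidirectional variant used with key0, key1 and key2 earlier in the trace.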
00:17:41.162 18:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.535 nvme0n1 00:17:42.535 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.535 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.535 18:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.101 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.101 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.101 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.101 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.101 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.101 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.101 { 00:17:43.101 "cntlid": 1, 00:17:43.101 "qid": 0, 00:17:43.101 "state": "enabled", 00:17:43.101 "thread": "nvmf_tgt_poll_group_000", 00:17:43.101 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:43.101 "listen_address": { 00:17:43.101 "trtype": "TCP", 00:17:43.101 "adrfam": "IPv4", 00:17:43.101 "traddr": "10.0.0.2", 00:17:43.101 "trsvcid": "4420" 00:17:43.101 }, 00:17:43.101 "peer_address": { 00:17:43.101 "trtype": "TCP", 00:17:43.101 "adrfam": "IPv4", 00:17:43.101 "traddr": "10.0.0.1", 00:17:43.101 "trsvcid": "39868" 00:17:43.101 }, 00:17:43.101 "auth": { 00:17:43.101 "state": "completed", 00:17:43.101 "digest": "sha512", 00:17:43.101 "dhgroup": "ffdhe8192" 00:17:43.101 } 00:17:43.101 } 00:17:43.101 ]' 00:17:43.101 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.101 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:43.101 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.101 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:43.101 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.101 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.101 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.101 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.359 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:17:43.359 18:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:17:44.293 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.293 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:44.293 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.293 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.293 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.293 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:44.293 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.293 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.293 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.293 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:44.293 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:44.552 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:44.552 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:44.552 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:44.552 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:44.552 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:44.552 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:44.552 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:44.552 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:44.552 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.552 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.809 request: 00:17:44.809 { 00:17:44.809 "name": "nvme0", 00:17:44.809 "trtype": "tcp", 00:17:44.809 "traddr": "10.0.0.2", 00:17:44.809 "adrfam": "ipv4", 00:17:44.809 "trsvcid": "4420", 00:17:44.809 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:44.809 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:44.809 "prchk_reftag": false, 00:17:44.809 "prchk_guard": false, 00:17:44.809 "hdgst": false, 00:17:44.809 "ddgst": false, 00:17:44.809 "dhchap_key": "key3", 00:17:44.809 "allow_unrecognized_csi": false, 00:17:44.809 "method": "bdev_nvme_attach_controller", 00:17:44.809 "req_id": 1 00:17:44.809 } 00:17:44.809 Got JSON-RPC error response 00:17:44.809 response: 00:17:44.809 { 00:17:44.809 "code": -5, 00:17:44.809 "message": "Input/output error" 00:17:44.809 } 00:17:44.809 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:44.809 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:44.809 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:44.809 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:44.809 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:44.809 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:44.809 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:44.809 18:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:45.067 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:45.067 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:45.067 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:45.067 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:45.067 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:45.067 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:45.067 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:45.067 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:45.067 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:45.067 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:45.325 request: 00:17:45.325 { 00:17:45.325 "name": "nvme0", 00:17:45.325 "trtype": "tcp", 00:17:45.325 "traddr": "10.0.0.2", 00:17:45.325 "adrfam": "ipv4", 00:17:45.325 "trsvcid": "4420", 00:17:45.325 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:45.325 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:45.325 "prchk_reftag": false, 00:17:45.325 "prchk_guard": false, 00:17:45.325 "hdgst": false, 00:17:45.325 "ddgst": false, 00:17:45.325 "dhchap_key": "key3", 00:17:45.325 "allow_unrecognized_csi": false, 00:17:45.325 "method": "bdev_nvme_attach_controller", 00:17:45.325 "req_id": 1 00:17:45.325 } 00:17:45.325 Got JSON-RPC error response 00:17:45.325 response: 00:17:45.325 { 00:17:45.325 "code": -5, 00:17:45.325 "message": "Input/output error" 00:17:45.325 } 00:17:45.325 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:45.325 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:45.325 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:45.325 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:45.325 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:45.325 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:45.325 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:45.325 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:45.325 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:45.325 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:45.582 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:45.582 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.582 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.582 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.582 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:45.582 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.582 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.582 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.582 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:45.582 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:45.582 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:45.582 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:45.582 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:45.582 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:45.582 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:45.582 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:45.582 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:45.582 18:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:46.147 request: 00:17:46.147 { 00:17:46.147 "name": "nvme0", 00:17:46.147 "trtype": "tcp", 00:17:46.147 "traddr": "10.0.0.2", 00:17:46.147 "adrfam": "ipv4", 00:17:46.147 "trsvcid": "4420", 00:17:46.147 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:46.147 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:46.147 "prchk_reftag": false, 00:17:46.147 "prchk_guard": false, 00:17:46.147 "hdgst": false, 00:17:46.147 "ddgst": false, 00:17:46.147 "dhchap_key": "key0", 00:17:46.147 "dhchap_ctrlr_key": "key1", 00:17:46.147 "allow_unrecognized_csi": false, 00:17:46.147 "method": "bdev_nvme_attach_controller", 00:17:46.147 "req_id": 1 00:17:46.147 } 00:17:46.147 Got JSON-RPC error response 00:17:46.147 response: 00:17:46.147 { 00:17:46.147 "code": -5, 00:17:46.147 "message": "Input/output error" 00:17:46.147 } 00:17:46.147 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:46.147 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:46.147 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:46.147 18:25:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:46.147 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:46.147 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:46.147 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:46.404 nvme0n1 00:17:46.404 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:46.404 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.404 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:46.662 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.662 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.662 18:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.921 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:17:46.921 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.921 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.921 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.921 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:46.921 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:46.921 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:48.295 nvme0n1 00:17:48.295 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:48.295 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:48.295 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.553 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.553 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:48.553 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.553 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.553 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.553 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:48.553 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:48.553 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.811 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.811 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:17:48.811 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: --dhchap-ctrl-secret DHHC-1:03:MTBhODI1NDc5YzRhNzVmYmU5MWJiYjc4YWZiNTEzOTJjNjEzOWE4MDQ1ZjE4Yzg2MDQ2MWVkNDZjODE1YzQ4OIYUh7k=: 00:17:49.744 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:49.744 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:49.744 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:49.744 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:49.744 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:49.744 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:49.744 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:49.744 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.744 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.002 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:17:50.002 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:50.002 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:50.002 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:50.002 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.002 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:50.002 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:50.002 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:50.002 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:50.002 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:50.934 request: 00:17:50.934 { 00:17:50.934 "name": "nvme0", 00:17:50.934 "trtype": "tcp", 00:17:50.934 "traddr": "10.0.0.2", 00:17:50.934 "adrfam": "ipv4", 00:17:50.934 "trsvcid": "4420", 00:17:50.934 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:50.934 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:50.934 "prchk_reftag": false, 00:17:50.934 "prchk_guard": false, 00:17:50.934 "hdgst": false, 00:17:50.934 "ddgst": false, 00:17:50.934 "dhchap_key": "key1", 00:17:50.934 "allow_unrecognized_csi": false, 00:17:50.934 "method": "bdev_nvme_attach_controller", 00:17:50.934 "req_id": 1 00:17:50.934 } 00:17:50.934 Got JSON-RPC error response 00:17:50.934 response: 00:17:50.934 { 00:17:50.934 "code": -5, 00:17:50.934 "message": "Input/output error" 00:17:50.934 } 00:17:50.934 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:50.934 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:50.934 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:50.934 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:50.934 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:50.934 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:50.934 18:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:52.375 nvme0n1 00:17:52.375 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:52.375 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:52.375 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.375 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.375 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.375 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.940 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:52.940 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.940 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.940 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.940 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:52.940 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:52.940 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:53.198 nvme0n1 00:17:53.198 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:53.198 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:53.198 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.456 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.456 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.456 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.714 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:53.714 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.714 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.714 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.714 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: '' 2s 00:17:53.714 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:53.714 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:53.714 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: 00:17:53.714 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:53.714 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:53.714 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:53.714 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: ]] 00:17:53.714 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NTQ3MmM0NDc2MjFlMjQxNDRlM2JlNmViZDFmZDkwZDgcKwWg: 00:17:53.714 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:53.714 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:53.714 18:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:55.613 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:55.613 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:17:55.613 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:17:55.613 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:17:55.613 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:17:55.613 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:17:55.613 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:17:55.613 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:55.613 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.613 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.613 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.613 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: 2s 00:17:55.613 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:55.613 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:55.613 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:55.613 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: 00:17:55.613 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:55.613 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:55.613 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:55.613 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: ]] 00:17:55.613 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZjM5YTY0NmEyYjhmMzYxODM0NmE2NWYzODMzNTUxNzA0MmY5MjAzODI0MjUyNzIxg3dVxg==: 00:17:55.613 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:55.613 18:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:58.140 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:58.140 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:17:58.140 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:17:58.140 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:17:58.140 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:17:58.140 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:17:58.140 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:17:58.140 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.140 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:58.140 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.140 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.140 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.140 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:58.140 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:58.140 18:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:59.073 nvme0n1 00:17:59.073 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:59.073 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.073 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.073 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.073 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:59.073 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:00.006 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:00.006 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:00.006 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.267 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.267 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:00.267 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.267 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.267 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.267 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:00.267 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:00.526 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:00.526 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.526 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@258 -- # jq -r '.[].name' 00:18:00.784 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.784 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:00.784 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.784 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.784 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.784 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:00.784 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:00.784 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:00.784 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:00.784 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:00.784 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:00.784 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:00.784 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:00.784 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:01.717 request: 00:18:01.717 { 00:18:01.717 "name": "nvme0", 00:18:01.717 "dhchap_key": "key1", 00:18:01.717 "dhchap_ctrlr_key": "key3", 00:18:01.717 "method": "bdev_nvme_set_keys", 00:18:01.717 "req_id": 1 00:18:01.717 } 00:18:01.717 Got JSON-RPC error response 00:18:01.717 response: 00:18:01.717 { 00:18:01.717 "code": -13, 00:18:01.717 "message": "Permission denied" 00:18:01.717 } 00:18:01.717 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:01.717 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:01.717 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:01.717 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:01.717 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:01.717 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:01.717 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.717 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:18:01.717 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:03.091 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:03.091 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:03.091 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.091 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:03.091 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:03.091 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.091 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.091 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.091 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:03.091 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:03.091 18:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:04.465 nvme0n1 00:18:04.465 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:04.465 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.465 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.465 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.465 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:04.465 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:04.465 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:04.465 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
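The re-key pattern driving these steps is: first update the key pair the target will accept for this host with nvmf_subsystem_set_keys, then ask the initiator-side bdev layer to re-authenticate the live controller with bdev_nvme_set_keys. A minimal sketch, using the same RPC path and sockets as in the earlier sketch:

# Target: change the accepted key pair for this host.
$RPC nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
  nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
  --dhchap-key key2 --dhchap-ctrlr-key key3

# Host: re-key the existing controller "nvme0" with the matching pair.
$RPC -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3

Switching the host to a pair the target was not configured with is expected to fail, which is why the NOT wrapper in these checks treats the JSON-RPC error -13 ("Permission denied") as a pass.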
00:18:04.465 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:04.465 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:04.465 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:04.465 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:04.465 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:05.398 request: 00:18:05.398 { 00:18:05.398 "name": "nvme0", 00:18:05.398 "dhchap_key": "key2", 00:18:05.398 "dhchap_ctrlr_key": "key0", 00:18:05.398 "method": "bdev_nvme_set_keys", 00:18:05.398 "req_id": 1 00:18:05.398 } 00:18:05.398 Got JSON-RPC error response 00:18:05.398 response: 00:18:05.398 { 00:18:05.398 "code": -13, 00:18:05.398 "message": "Permission denied" 00:18:05.398 } 00:18:05.398 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:05.398 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:05.398 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:05.398 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:05.398 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:05.398 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.398 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:05.656 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:05.656 18:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:06.590 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:06.590 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:06.590 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.848 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:06.848 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:06.848 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:06.848 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1236240 00:18:06.848 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1236240 ']' 00:18:06.848 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1236240 00:18:06.848 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:06.848 
18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:06.848 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1236240 00:18:06.848 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:06.848 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:06.848 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1236240' 00:18:06.848 killing process with pid 1236240 00:18:06.848 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1236240 00:18:06.848 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1236240 00:18:07.414 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:07.414 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:07.414 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:07.414 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:07.414 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:07.414 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:07.414 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:07.414 rmmod nvme_tcp 00:18:07.414 rmmod nvme_fabrics 00:18:07.414 rmmod nvme_keyring 00:18:07.414 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:07.414 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:07.414 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:07.414 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 1258902 ']' 00:18:07.414 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 1258902 00:18:07.414 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1258902 ']' 00:18:07.414 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1258902 00:18:07.414 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:07.414 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:07.414 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1258902 00:18:07.414 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:07.415 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:07.415 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1258902' 00:18:07.415 killing process with pid 1258902 00:18:07.415 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1258902 00:18:07.415 18:25:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1258902 00:18:07.674 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:07.674 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:07.674 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:07.674 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:07.674 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:18:07.674 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:07.674 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:18:07.674 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:07.674 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:07.674 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.674 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:07.674 18:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.211 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:10.211 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.pW0 /tmp/spdk.key-sha256.4Kf /tmp/spdk.key-sha384.tST /tmp/spdk.key-sha512.47e /tmp/spdk.key-sha512.b3c /tmp/spdk.key-sha384.Qq7 /tmp/spdk.key-sha256.g6G '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:10.211 00:18:10.211 real 3m30.757s 00:18:10.211 user 8m14.457s 00:18:10.211 sys 0m28.088s 00:18:10.211 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:10.211 18:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.211 ************************************ 00:18:10.211 END TEST nvmf_auth_target 00:18:10.211 ************************************ 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:10.211 ************************************ 00:18:10.211 START TEST nvmf_bdevio_no_huge 00:18:10.211 ************************************ 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:10.211 * Looking for test storage... 
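The teardown just completed (killprocess plus nvmftestfini and the key cleanup) amounts to roughly the following; the pid variable names are hypothetical stand-ins for the processes killed above, and the pipe between iptables-save, the SPDK_NVMF filter, and iptables-restore is an assumption about how the iptr helper combines the three commands shown in the log.

# Stop the host and target daemons started for the test.
kill "$host_pid" "$target_pid"                          # hypothetical names for pids 1236240 / 1258902

# Unload the kernel fabrics stack pulled in by "nvme connect".
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Restore firewall rules, flush the test interface, and remove the generated keys.
iptables-save | grep -v SPDK_NVMF | iptables-restore    # assumed pipeline for the iptr helper
ip -4 addr flush cvl_0_1
rm -f /tmp/spdk.key-null.pW0 /tmp/spdk.key-sha256.4Kf /tmp/spdk.key-sha384.tST \
      /tmp/spdk.key-sha512.47e /tmp/spdk.key-sha512.b3c /tmp/spdk.key-sha384.Qq7 /tmp/spdk.key-sha256.g6G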
00:18:10.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:10.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.211 --rc genhtml_branch_coverage=1 00:18:10.211 --rc genhtml_function_coverage=1 00:18:10.211 --rc genhtml_legend=1 00:18:10.211 --rc geninfo_all_blocks=1 00:18:10.211 --rc geninfo_unexecuted_blocks=1 00:18:10.211 00:18:10.211 ' 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:10.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.211 --rc genhtml_branch_coverage=1 00:18:10.211 --rc genhtml_function_coverage=1 00:18:10.211 --rc genhtml_legend=1 00:18:10.211 --rc geninfo_all_blocks=1 00:18:10.211 --rc geninfo_unexecuted_blocks=1 00:18:10.211 00:18:10.211 ' 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:10.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.211 --rc genhtml_branch_coverage=1 00:18:10.211 --rc genhtml_function_coverage=1 00:18:10.211 --rc genhtml_legend=1 00:18:10.211 --rc geninfo_all_blocks=1 00:18:10.211 --rc geninfo_unexecuted_blocks=1 00:18:10.211 00:18:10.211 ' 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:10.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.211 --rc genhtml_branch_coverage=1 00:18:10.211 --rc genhtml_function_coverage=1 00:18:10.211 --rc genhtml_legend=1 00:18:10.211 --rc geninfo_all_blocks=1 00:18:10.211 --rc geninfo_unexecuted_blocks=1 00:18:10.211 00:18:10.211 ' 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:10.211 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:10.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:10.212 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:12.114 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:12.114 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:12.114 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:12.114 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:12.114 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:12.114 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:12.114 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:12.114 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:12.114 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:12.114 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:12.114 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:12.114 
18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:12.114 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:12.114 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:12.114 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:12.114 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:12.114 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:12.114 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:12.374 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:12.374 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:12.374 Found net devices under 0000:09:00.0: cvl_0_0 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:12.374 Found net devices under 0000:09:00.1: cvl_0_1 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:12.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:12.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.374 ms 00:18:12.374 00:18:12.374 --- 10.0.0.2 ping statistics --- 00:18:12.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.374 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:12.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:12.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:18:12.374 00:18:12.374 --- 10.0.0.1 ping statistics --- 00:18:12.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.374 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=1264174 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 1264174 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 1264174 ']' 00:18:12.374 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.375 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:18:12.375 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.375 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:12.375 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:12.375 [2024-10-08 18:25:30.621209] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:18:12.375 [2024-10-08 18:25:30.621320] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:12.632 [2024-10-08 18:25:30.697404] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:12.632 [2024-10-08 18:25:30.807960] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:12.632 [2024-10-08 18:25:30.808034] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:12.632 [2024-10-08 18:25:30.808047] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:12.633 [2024-10-08 18:25:30.808058] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:12.633 [2024-10-08 18:25:30.808083] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
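The bring-up traced between nvmf_tcp_init and this nvmfappstart is reproducible by hand: the target port moves into its own network namespace, both sides get 10.0.0.x addresses, port 4420 is opened with an SPDK-tagged iptables rule, connectivity is ping-checked, and nvmf_tgt starts inside the namespace with 1024 MB of ordinary (non-hugepage) memory. A hedged sketch of those steps, assuming two ports already named cvl_0_0/cvl_0_1 and treating the SPDK path as an example:

#!/usr/bin/env bash
# Sketch of the TCP test bring-up: the target NIC lives in its own
# namespace, the initiator NIC stays in the default namespace.
set -ex

NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0           # gets 10.0.0.2, inside the namespace
INITIATOR_IF=cvl_0_1        # gets 10.0.0.1, default namespace
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

ip -4 addr flush dev "$TARGET_IF" || true
ip -4 addr flush dev "$INITIATOR_IF" || true

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# open the NVMe/TCP port, tagged so the cleanup pass can strip it later
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF

ping -c 1 10.0.0.2                       # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator

# launch the target inside the namespace with plain (non-huge) memory;
# the harness then waits for the RPC socket before issuing rpc.py calls
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &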
00:18:12.633 [2024-10-08 18:25:30.809142] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:18:12.633 [2024-10-08 18:25:30.809206] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:18:12.633 [2024-10-08 18:25:30.809268] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:18:12.633 [2024-10-08 18:25:30.809271] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:18:12.891 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:12.891 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:18:12.891 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:12.891 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:12.891 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:12.891 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.891 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:12.891 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.891 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:12.891 [2024-10-08 18:25:30.965796] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:12.891 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.891 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:12.891 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.891 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:12.891 Malloc0 00:18:12.891 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.891 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:12.891 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.891 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:12.891 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.891 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:12.891 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.891 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:12.891 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.891 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:18:12.891 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.891 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:12.891 [2024-10-08 18:25:31.004121] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:12.891 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.891 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:12.891 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:12.891 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:18:12.891 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:18:12.891 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:12.891 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:12.891 { 00:18:12.891 "params": { 00:18:12.891 "name": "Nvme$subsystem", 00:18:12.891 "trtype": "$TEST_TRANSPORT", 00:18:12.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:12.891 "adrfam": "ipv4", 00:18:12.891 "trsvcid": "$NVMF_PORT", 00:18:12.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:12.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:12.891 "hdgst": ${hdgst:-false}, 00:18:12.891 "ddgst": ${ddgst:-false} 00:18:12.891 }, 00:18:12.891 "method": "bdev_nvme_attach_controller" 00:18:12.891 } 00:18:12.891 EOF 00:18:12.891 )") 00:18:12.891 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:18:12.891 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 00:18:12.891 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:18:12.891 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:18:12.891 "params": { 00:18:12.891 "name": "Nvme1", 00:18:12.891 "trtype": "tcp", 00:18:12.891 "traddr": "10.0.0.2", 00:18:12.891 "adrfam": "ipv4", 00:18:12.891 "trsvcid": "4420", 00:18:12.891 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:12.891 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:12.891 "hdgst": false, 00:18:12.891 "ddgst": false 00:18:12.891 }, 00:18:12.891 "method": "bdev_nvme_attach_controller" 00:18:12.891 }' 00:18:12.891 [2024-10-08 18:25:31.057087] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
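The RPC calls above build the whole target in five steps (TCP transport, a 64 MiB/512 B malloc bdev, subsystem cnode1, its namespace, and a listener on 10.0.0.2:4420), and gen_nvmf_target_json then hands bdevio a bdev_nvme_attach_controller config on fd 62. A sketch of the same setup with the config written to a temporary file instead of a file descriptor; the rpc.py path and the /tmp filename are illustrative, and the JSON is equivalent to what the log shows being generated:

#!/usr/bin/env bash
# Sketch of the target-side RPC setup and the initiator-side bdevio run.
set -ex

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"

# target side: transport, backing bdev, subsystem, namespace, listener
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420

# initiator side: bdevio consumes a bdev JSON config and attaches Nvme1
cat > /tmp/bdevio.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

$SPDK/test/bdev/bdevio/bdevio --json /tmp/bdevio.json --no-huge -s 1024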
00:18:12.891 [2024-10-08 18:25:31.057183] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1264220 ] 00:18:12.891 [2024-10-08 18:25:31.124679] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:13.149 [2024-10-08 18:25:31.244703] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:13.149 [2024-10-08 18:25:31.244751] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.149 [2024-10-08 18:25:31.244755] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.407 I/O targets: 00:18:13.407 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:13.407 00:18:13.407 00:18:13.407 CUnit - A unit testing framework for C - Version 2.1-3 00:18:13.407 http://cunit.sourceforge.net/ 00:18:13.407 00:18:13.407 00:18:13.407 Suite: bdevio tests on: Nvme1n1 00:18:13.407 Test: blockdev write read block ...passed 00:18:13.407 Test: blockdev write zeroes read block ...passed 00:18:13.407 Test: blockdev write zeroes read no split ...passed 00:18:13.407 Test: blockdev write zeroes read split ...passed 00:18:13.407 Test: blockdev write zeroes read split partial ...passed 00:18:13.407 Test: blockdev reset ...[2024-10-08 18:25:31.601091] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:13.407 [2024-10-08 18:25:31.601200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd587e0 (9): Bad file descriptor 00:18:13.665 [2024-10-08 18:25:31.738630] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:13.665 passed 00:18:13.665 Test: blockdev write read 8 blocks ...passed 00:18:13.665 Test: blockdev write read size > 128k ...passed 00:18:13.665 Test: blockdev write read invalid size ...passed 00:18:13.665 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:13.665 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:13.665 Test: blockdev write read max offset ...passed 00:18:13.665 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:13.665 Test: blockdev writev readv 8 blocks ...passed 00:18:13.665 Test: blockdev writev readv 30 x 1block ...passed 00:18:13.665 Test: blockdev writev readv block ...passed 00:18:13.665 Test: blockdev writev readv size > 128k ...passed 00:18:13.665 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:13.665 Test: blockdev comparev and writev ...[2024-10-08 18:25:31.951604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:13.665 [2024-10-08 18:25:31.951639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.665 [2024-10-08 18:25:31.951663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:13.665 [2024-10-08 18:25:31.951680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:13.665 [2024-10-08 18:25:31.952058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:13.665 [2024-10-08 18:25:31.952083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:13.665 [2024-10-08 18:25:31.952105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:13.665 [2024-10-08 18:25:31.952122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:13.665 [2024-10-08 18:25:31.952499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:13.665 [2024-10-08 18:25:31.952524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:13.665 [2024-10-08 18:25:31.952545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:13.923 [2024-10-08 18:25:31.952561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:13.923 [2024-10-08 18:25:31.952894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:13.923 [2024-10-08 18:25:31.952918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:13.923 [2024-10-08 18:25:31.952938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:13.923 [2024-10-08 18:25:31.952954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:13.923 passed 00:18:13.923 Test: blockdev nvme passthru rw ...passed 00:18:13.923 Test: blockdev nvme passthru vendor specific ...[2024-10-08 18:25:32.036584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:13.923 [2024-10-08 18:25:32.036612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:13.923 [2024-10-08 18:25:32.036748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:13.923 [2024-10-08 18:25:32.036771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:13.923 [2024-10-08 18:25:32.036914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:13.923 [2024-10-08 18:25:32.036936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:13.923 [2024-10-08 18:25:32.037071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:13.923 [2024-10-08 18:25:32.037094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:13.923 passed 00:18:13.923 Test: blockdev nvme admin passthru ...passed 00:18:13.923 Test: blockdev copy ...passed 00:18:13.923 00:18:13.923 Run Summary: Type Total Ran Passed Failed Inactive 00:18:13.923 suites 1 1 n/a 0 0 00:18:13.923 tests 23 23 23 0 0 00:18:13.923 asserts 152 152 152 0 n/a 00:18:13.923 00:18:13.923 Elapsed time = 1.308 seconds 00:18:14.489 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:14.489 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.489 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:14.489 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.489 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:14.489 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:14.489 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:14.489 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:14.489 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:14.489 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:14.489 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:14.489 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:14.489 rmmod nvme_tcp 00:18:14.489 rmmod nvme_fabrics 00:18:14.489 rmmod nvme_keyring 00:18:14.489 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:14.489 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:18:14.489 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:14.489 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 1264174 ']' 00:18:14.489 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 1264174 00:18:14.489 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 1264174 ']' 00:18:14.489 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 1264174 00:18:14.489 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:18:14.489 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:14.489 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1264174 00:18:14.489 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:18:14.489 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:18:14.489 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1264174' 00:18:14.489 killing process with pid 1264174 00:18:14.489 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 1264174 00:18:14.489 18:25:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 1264174 00:18:15.056 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:15.056 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:15.056 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:15.056 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:15.056 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:18:15.056 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:15.056 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:18:15.056 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:15.056 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:15.056 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.056 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:15.056 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.959 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:16.959 00:18:16.959 real 0m7.057s 00:18:16.959 user 0m11.677s 00:18:16.959 sys 0m2.843s 00:18:16.959 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:16.959 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:18:16.959 ************************************ 00:18:16.959 END TEST nvmf_bdevio_no_huge 00:18:16.959 ************************************ 00:18:16.959 18:25:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:16.959 18:25:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:16.959 18:25:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:16.959 18:25:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:16.959 ************************************ 00:18:16.959 START TEST nvmf_tls 00:18:16.959 ************************************ 00:18:16.959 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:16.959 * Looking for test storage... 00:18:16.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:16.959 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:16.959 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:18:16.959 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:17.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:17.219 --rc genhtml_branch_coverage=1 00:18:17.219 --rc genhtml_function_coverage=1 00:18:17.219 --rc genhtml_legend=1 00:18:17.219 --rc geninfo_all_blocks=1 00:18:17.219 --rc geninfo_unexecuted_blocks=1 00:18:17.219 00:18:17.219 ' 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:17.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:17.219 --rc genhtml_branch_coverage=1 00:18:17.219 --rc genhtml_function_coverage=1 00:18:17.219 --rc genhtml_legend=1 00:18:17.219 --rc geninfo_all_blocks=1 00:18:17.219 --rc geninfo_unexecuted_blocks=1 00:18:17.219 00:18:17.219 ' 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:17.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:17.219 --rc genhtml_branch_coverage=1 00:18:17.219 --rc genhtml_function_coverage=1 00:18:17.219 --rc genhtml_legend=1 00:18:17.219 --rc geninfo_all_blocks=1 00:18:17.219 --rc geninfo_unexecuted_blocks=1 00:18:17.219 00:18:17.219 ' 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:17.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:17.219 --rc genhtml_branch_coverage=1 00:18:17.219 --rc genhtml_function_coverage=1 00:18:17.219 --rc genhtml_legend=1 00:18:17.219 --rc geninfo_all_blocks=1 00:18:17.219 --rc geninfo_unexecuted_blocks=1 00:18:17.219 00:18:17.219 ' 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
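As with the bdevio suite, tls.sh starts by probing the installed lcov with the lt/cmp_versions helper, which splits both version strings on '.-:' and compares them field by field before deciding which coverage flags to export. A simplified stand-alone version of that check (numeric fields only, not the exact scripts/common.sh helper):

#!/usr/bin/env bash
# Field-by-field "version A < version B" check, same idea as the
# cmp_versions / lt trace above (simplified, numeric fields only).
version_lt() {
    local -a a b
    IFS='.-:' read -ra a <<< "$1"
    IFS='.-:' read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal is not "less than"
}

version_lt 1.15 2 && echo "old lcov: enable extra --rc options"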
00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:17.219 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:17.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:18:17.220 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.122 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:19.122 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:19.122 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:19.122 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:19.122 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:19.122 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:19.122 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:19.122 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:19.122 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:19.122 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:19.122 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:19.122 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:19.122 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:19.122 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:19.122 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:19.122 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:19.122 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:19.122 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:19.122 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:19.122 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:19.122 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
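Up to this point gather_supported_nvmf_pci_devs has only filled the vendor/device tables: E810 parts under 0x8086:0x1592/0x159b, X722 under 0x8086:0x37d2, and the Mellanox list under 0x15b3. The entries that follow resolve each matching PCI function to a kernel interface by globbing its net/ directory in sysfs. A rough standalone sketch of that lookup (the function name is invented for the illustration; the sysfs path is the one the script globs):

  # Resolve a PCI address such as 0000:09:00.0 to the net device bound to it.
  pci_to_netdev() {
    local pci=$1 path
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
      [[ -e $path ]] || continue     # no network driver bound to this function
      echo "${path##*/}"             # keep only the interface name, e.g. cvl_0_0
    done
  }

  pci_to_netdev 0000:09:00.0         # on this rig, prints the ice-driver port cvl_0_0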
00:18:19.122 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:19.122 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:19.122 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:19.123 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:19.123 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:19.123 Found net devices under 0000:09:00.0: cvl_0_0 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:19.123 Found net devices under 0000:09:00.1: cvl_0_1 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:19.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:19.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms 00:18:19.123 00:18:19.123 --- 10.0.0.2 ping statistics --- 00:18:19.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.123 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:18:19.123 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:19.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:19.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:18:19.123 00:18:19.123 --- 10.0.0.1 ping statistics --- 00:18:19.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.123 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:18:19.382 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:19.382 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:18:19.382 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:19.382 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:19.382 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:19.382 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:19.382 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:19.382 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:19.382 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:19.382 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:19.382 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:19.382 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:19.382 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.382 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1266406 00:18:19.382 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:19.382 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1266406 00:18:19.382 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1266406 ']' 00:18:19.382 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.382 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:19.382 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.382 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:19.382 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.382 [2024-10-08 18:25:37.489077] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
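With both pings answered, nvmftestinit has finished building the physical loopback this job relies on: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed 10.0.0.2 as the target side, the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP/4420 is opened in iptables, and nvmf_tgt is now being launched inside that namespace via ip netns exec. The plumbing condensed into a standalone sketch (interface names and addresses taken from the trace; other rigs will differ):

  TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                 # target port lives in its own namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                # initiator -> target over the cable
  ip netns exec "$NS" ping -c 1 10.0.0.1            # target -> initiator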
00:18:19.382 [2024-10-08 18:25:37.489180] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.382 [2024-10-08 18:25:37.556670] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.382 [2024-10-08 18:25:37.667138] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.382 [2024-10-08 18:25:37.667191] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.382 [2024-10-08 18:25:37.667219] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.382 [2024-10-08 18:25:37.667240] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:19.382 [2024-10-08 18:25:37.667250] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:19.382 [2024-10-08 18:25:37.667944] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.640 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:19.640 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:19.640 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:19.640 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:19.640 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.640 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:19.640 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:19.640 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:19.899 true 00:18:19.899 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:19.899 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:20.157 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:20.157 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:20.157 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:20.415 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:20.415 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:20.673 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:20.673 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:20.673 18:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:20.931 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:20.931 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:21.189 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:21.189 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:21.189 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:21.189 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:21.447 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:21.447 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:21.447 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:21.705 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:21.705 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:22.270 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:22.270 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:22.270 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:22.528 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:22.528 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:22.786 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:22.786 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:22.786 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:22.786 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:22.786 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:18:22.786 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:18:22.786 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:18:22.786 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:18:22.786 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:18:22.786 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:22.786 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:22.786 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:22.786 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # local prefix key digest 00:18:22.786 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:18:22.786 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:18:22.786 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:18:22.786 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:18:22.786 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:22.786 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:22.786 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.WHQ3tZooah 00:18:22.786 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:22.786 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.gx0K6o5M3m 00:18:22.786 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:22.786 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:22.786 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.WHQ3tZooah 00:18:22.786 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.gx0K6o5M3m 00:18:22.786 18:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:23.045 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:23.612 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.WHQ3tZooah 00:18:23.612 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.WHQ3tZooah 00:18:23.612 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:23.870 [2024-10-08 18:25:41.941525] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:23.870 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:24.128 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:24.386 [2024-10-08 18:25:42.595307] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:24.386 [2024-10-08 18:25:42.595566] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:24.386 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:24.950 malloc0 00:18:24.950 18:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:25.240 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.WHQ3tZooah 00:18:25.523 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:25.523 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.WHQ3tZooah 00:18:37.752 Initializing NVMe Controllers 00:18:37.752 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:37.752 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:37.752 Initialization complete. Launching workers. 00:18:37.752 ======================================================== 00:18:37.752 Latency(us) 00:18:37.752 Device Information : IOPS MiB/s Average min max 00:18:37.752 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8620.68 33.67 7426.10 1065.21 10684.63 00:18:37.752 ======================================================== 00:18:37.752 Total : 8620.68 33.67 7426.10 1065.21 10684.63 00:18:37.752 00:18:37.752 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WHQ3tZooah 00:18:37.752 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:37.752 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:37.752 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:37.752 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.WHQ3tZooah 00:18:37.752 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:37.752 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1268430 00:18:37.752 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:37.752 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:37.752 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1268430 /var/tmp/bdevperf.sock 00:18:37.752 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1268430 ']' 00:18:37.752 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:37.752 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:37.752 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
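The perf numbers above (about 8.6k IOPS at 4 KiB, QD 64, randrw over TLS) come from a target configured entirely over JSON-RPC: the ssl socket implementation pinned to TLS 1.3, one of the NVMeTLSkey-1:01:... interchange strings generated earlier (written to a mktemp file and chmod'd 0600) registered in the keyring, a TCP listener created with -k so it requires TLS, and host1 admitted with --psk; framework_start_init appears in the middle because the app was launched with --wait-for-rpc. bdevperf is now being started to drive the same path. The target-side RPC sequence, compressed into one sketch ($RPC stands for the workspace rpc.py; NQNs, address and PSK file are the ones in the trace):

  RPC=./scripts/rpc.py
  PSK=/tmp/tmp.WHQ3tZooah     # NVMeTLSkey-1:01:... interchange key file, chmod 0600

  $RPC sock_set_default_impl -i ssl
  $RPC sock_impl_set_options -i ssl --tls-version 13
  $RPC framework_start_init
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC keyring_file_add_key key0 "$PSK"
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0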
00:18:37.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:37.752 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:37.752 18:25:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.752 [2024-10-08 18:25:53.945932] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:18:37.752 [2024-10-08 18:25:53.946012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1268430 ] 00:18:37.752 [2024-10-08 18:25:54.004971] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.752 [2024-10-08 18:25:54.113246] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:18:37.752 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:37.752 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:37.752 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WHQ3tZooah 00:18:37.752 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:37.752 [2024-10-08 18:25:54.797389] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:37.752 TLSTESTn1 00:18:37.752 18:25:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:37.752 Running I/O for 10 seconds... 
00:18:39.124 3473.00 IOPS, 13.57 MiB/s [2024-10-08T16:25:58.346Z] 3588.50 IOPS, 14.02 MiB/s [2024-10-08T16:25:59.278Z] 3609.33 IOPS, 14.10 MiB/s [2024-10-08T16:26:00.211Z] 3604.75 IOPS, 14.08 MiB/s [2024-10-08T16:26:01.145Z] 3611.20 IOPS, 14.11 MiB/s [2024-10-08T16:26:02.079Z] 3609.00 IOPS, 14.10 MiB/s [2024-10-08T16:26:03.458Z] 3618.00 IOPS, 14.13 MiB/s [2024-10-08T16:26:04.390Z] 3624.38 IOPS, 14.16 MiB/s [2024-10-08T16:26:05.322Z] 3627.89 IOPS, 14.17 MiB/s [2024-10-08T16:26:05.322Z] 3618.90 IOPS, 14.14 MiB/s 00:18:47.035 Latency(us) 00:18:47.035 [2024-10-08T16:26:05.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.035 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:47.035 Verification LBA range: start 0x0 length 0x2000 00:18:47.035 TLSTESTn1 : 10.02 3622.96 14.15 0.00 0.00 35263.67 10874.12 33787.45 00:18:47.035 [2024-10-08T16:26:05.322Z] =================================================================================================================== 00:18:47.035 [2024-10-08T16:26:05.322Z] Total : 3622.96 14.15 0.00 0.00 35263.67 10874.12 33787.45 00:18:47.035 { 00:18:47.035 "results": [ 00:18:47.035 { 00:18:47.035 "job": "TLSTESTn1", 00:18:47.035 "core_mask": "0x4", 00:18:47.035 "workload": "verify", 00:18:47.035 "status": "finished", 00:18:47.035 "verify_range": { 00:18:47.035 "start": 0, 00:18:47.035 "length": 8192 00:18:47.035 }, 00:18:47.035 "queue_depth": 128, 00:18:47.035 "io_size": 4096, 00:18:47.035 "runtime": 10.023578, 00:18:47.035 "iops": 3622.9577901224493, 00:18:47.035 "mibps": 14.152178867665818, 00:18:47.035 "io_failed": 0, 00:18:47.035 "io_timeout": 0, 00:18:47.035 "avg_latency_us": 35263.666329656655, 00:18:47.035 "min_latency_us": 10874.121481481481, 00:18:47.035 "max_latency_us": 33787.44888888889 00:18:47.035 } 00:18:47.035 ], 00:18:47.036 "core_count": 1 00:18:47.036 } 00:18:47.036 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:47.036 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1268430 00:18:47.036 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1268430 ']' 00:18:47.036 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1268430 00:18:47.036 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:47.036 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:47.036 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1268430 00:18:47.036 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:47.036 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:47.036 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1268430' 00:18:47.036 killing process with pid 1268430 00:18:47.036 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1268430 00:18:47.036 Received shutdown signal, test time was about 10.000000 seconds 00:18:47.036 00:18:47.036 Latency(us) 00:18:47.036 [2024-10-08T16:26:05.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.036 [2024-10-08T16:26:05.323Z] 
=================================================================================================================== 00:18:47.036 [2024-10-08T16:26:05.323Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:47.036 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1268430 00:18:47.294 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gx0K6o5M3m 00:18:47.294 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:47.294 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gx0K6o5M3m 00:18:47.294 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:47.294 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:47.294 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:47.294 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:47.294 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gx0K6o5M3m 00:18:47.294 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:47.294 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:47.294 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:47.294 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gx0K6o5M3m 00:18:47.294 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:47.294 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1269749 00:18:47.294 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:47.294 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:47.294 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1269749 /var/tmp/bdevperf.sock 00:18:47.294 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1269749 ']' 00:18:47.294 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:47.294 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:47.294 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:47.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
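The run that just finished drives the initiator side through bdevperf's private RPC socket: the same PSK file is registered under the name key0, bdev_nvme_attach_controller is told to use it for the TLS handshake, and bdevperf.py runs the verify workload; the attempt now being prepared repeats the attach with the other key (/tmp/tmp.gx0K6o5M3m) and is expected to fail the handshake. The initiator-side sequence as a sketch (rpc_b is a shorthand defined here; the trace uses the full workspace paths):

  rpc_b() { ./scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"; }

  rpc_b keyring_file_add_key key0 /tmp/tmp.WHQ3tZooah
  rpc_b bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests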
00:18:47.294 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:47.294 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.294 [2024-10-08 18:26:05.444758] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:18:47.294 [2024-10-08 18:26:05.444840] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1269749 ] 00:18:47.294 [2024-10-08 18:26:05.504955] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.552 [2024-10-08 18:26:05.614250] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:18:47.552 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:47.552 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:47.552 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gx0K6o5M3m 00:18:47.810 18:26:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:48.067 [2024-10-08 18:26:06.241152] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:48.067 [2024-10-08 18:26:06.246748] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:48.067 [2024-10-08 18:26:06.247301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a92380 (107): Transport endpoint is not connected 00:18:48.067 [2024-10-08 18:26:06.248267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a92380 (9): Bad file descriptor 00:18:48.067 [2024-10-08 18:26:06.249265] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:48.067 [2024-10-08 18:26:06.249308] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:48.067 [2024-10-08 18:26:06.249323] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:48.067 [2024-10-08 18:26:06.249343] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:48.067 request: 00:18:48.067 { 00:18:48.067 "name": "TLSTEST", 00:18:48.067 "trtype": "tcp", 00:18:48.067 "traddr": "10.0.0.2", 00:18:48.067 "adrfam": "ipv4", 00:18:48.067 "trsvcid": "4420", 00:18:48.067 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.067 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:48.067 "prchk_reftag": false, 00:18:48.067 "prchk_guard": false, 00:18:48.067 "hdgst": false, 00:18:48.067 "ddgst": false, 00:18:48.067 "psk": "key0", 00:18:48.067 "allow_unrecognized_csi": false, 00:18:48.067 "method": "bdev_nvme_attach_controller", 00:18:48.067 "req_id": 1 00:18:48.067 } 00:18:48.067 Got JSON-RPC error response 00:18:48.067 response: 00:18:48.067 { 00:18:48.067 "code": -5, 00:18:48.067 "message": "Input/output error" 00:18:48.067 } 00:18:48.067 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1269749 00:18:48.067 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1269749 ']' 00:18:48.067 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1269749 00:18:48.067 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:48.067 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:48.067 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1269749 00:18:48.067 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:48.067 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:48.067 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1269749' 00:18:48.067 killing process with pid 1269749 00:18:48.067 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1269749 00:18:48.067 Received shutdown signal, test time was about 10.000000 seconds 00:18:48.067 00:18:48.067 Latency(us) 00:18:48.067 [2024-10-08T16:26:06.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.067 [2024-10-08T16:26:06.354Z] =================================================================================================================== 00:18:48.067 [2024-10-08T16:26:06.354Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:48.067 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1269749 00:18:48.325 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:48.325 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:48.325 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:48.325 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:48.325 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:48.325 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.WHQ3tZooah 00:18:48.325 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:48.325 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.WHQ3tZooah 00:18:48.325 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:48.325 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:48.325 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:48.325 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:48.325 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.WHQ3tZooah 00:18:48.325 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:48.325 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:48.325 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:48.325 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.WHQ3tZooah 00:18:48.325 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:48.325 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1269893 00:18:48.325 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:48.325 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:48.325 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1269893 /var/tmp/bdevperf.sock 00:18:48.325 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1269893 ']' 00:18:48.325 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:48.325 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:48.325 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:48.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:48.325 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:48.325 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.325 [2024-10-08 18:26:06.609758] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:18:48.325 [2024-10-08 18:26:06.609844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1269893 ] 00:18:48.583 [2024-10-08 18:26:06.668766] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.583 [2024-10-08 18:26:06.776458] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:18:48.841 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:48.841 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:48.841 18:26:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WHQ3tZooah 00:18:49.098 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:49.356 [2024-10-08 18:26:07.390693] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:49.356 [2024-10-08 18:26:07.397417] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:49.356 [2024-10-08 18:26:07.397447] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:49.356 [2024-10-08 18:26:07.397498] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:49.356 [2024-10-08 18:26:07.397919] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe77380 (107): Transport endpoint is not connected 00:18:49.356 [2024-10-08 18:26:07.398909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe77380 (9): Bad file descriptor 00:18:49.356 [2024-10-08 18:26:07.399908] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:49.356 [2024-10-08 18:26:07.399926] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:49.356 [2024-10-08 18:26:07.399954] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:49.356 [2024-10-08 18:26:07.399973] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:49.356 request: 00:18:49.356 { 00:18:49.356 "name": "TLSTEST", 00:18:49.356 "trtype": "tcp", 00:18:49.356 "traddr": "10.0.0.2", 00:18:49.356 "adrfam": "ipv4", 00:18:49.356 "trsvcid": "4420", 00:18:49.356 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.356 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:49.356 "prchk_reftag": false, 00:18:49.356 "prchk_guard": false, 00:18:49.356 "hdgst": false, 00:18:49.356 "ddgst": false, 00:18:49.356 "psk": "key0", 00:18:49.356 "allow_unrecognized_csi": false, 00:18:49.356 "method": "bdev_nvme_attach_controller", 00:18:49.356 "req_id": 1 00:18:49.356 } 00:18:49.356 Got JSON-RPC error response 00:18:49.356 response: 00:18:49.356 { 00:18:49.356 "code": -5, 00:18:49.356 "message": "Input/output error" 00:18:49.356 } 00:18:49.356 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1269893 00:18:49.356 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1269893 ']' 00:18:49.356 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1269893 00:18:49.356 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:49.356 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:49.356 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1269893 00:18:49.356 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:49.356 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:49.356 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1269893' 00:18:49.356 killing process with pid 1269893 00:18:49.356 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1269893 00:18:49.356 Received shutdown signal, test time was about 10.000000 seconds 00:18:49.356 00:18:49.356 Latency(us) 00:18:49.356 [2024-10-08T16:26:07.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.356 [2024-10-08T16:26:07.643Z] =================================================================================================================== 00:18:49.356 [2024-10-08T16:26:07.643Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:49.356 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1269893 00:18:49.614 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:49.614 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:49.614 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:49.614 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:49.614 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:49.614 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.WHQ3tZooah 00:18:49.614 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:49.614 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.WHQ3tZooah 00:18:49.614 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:49.614 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:49.614 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:49.614 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:49.614 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.WHQ3tZooah 00:18:49.614 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:49.614 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:49.614 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:49.614 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.WHQ3tZooah 00:18:49.614 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:49.614 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1270033 00:18:49.614 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:49.614 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:49.614 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1270033 /var/tmp/bdevperf.sock 00:18:49.614 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1270033 ']' 00:18:49.614 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:49.614 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:49.614 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:49.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:49.614 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:49.614 18:26:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.614 [2024-10-08 18:26:07.751483] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:18:49.614 [2024-10-08 18:26:07.751560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1270033 ] 00:18:49.614 [2024-10-08 18:26:07.809586] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.872 [2024-10-08 18:26:07.921954] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.872 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:49.872 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:49.872 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WHQ3tZooah 00:18:50.131 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:50.389 [2024-10-08 18:26:08.527799] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:50.389 [2024-10-08 18:26:08.533131] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:50.389 [2024-10-08 18:26:08.533162] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:50.389 [2024-10-08 18:26:08.533217] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:50.389 [2024-10-08 18:26:08.533733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e0380 (107): Transport endpoint is not connected 00:18:50.389 [2024-10-08 18:26:08.534720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e0380 (9): Bad file descriptor 00:18:50.389 [2024-10-08 18:26:08.535719] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:50.389 [2024-10-08 18:26:08.535739] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:50.389 [2024-10-08 18:26:08.535766] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:50.389 [2024-10-08 18:26:08.535785] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:18:50.389 request: 00:18:50.389 { 00:18:50.389 "name": "TLSTEST", 00:18:50.389 "trtype": "tcp", 00:18:50.389 "traddr": "10.0.0.2", 00:18:50.389 "adrfam": "ipv4", 00:18:50.389 "trsvcid": "4420", 00:18:50.389 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:50.389 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:50.389 "prchk_reftag": false, 00:18:50.389 "prchk_guard": false, 00:18:50.389 "hdgst": false, 00:18:50.389 "ddgst": false, 00:18:50.389 "psk": "key0", 00:18:50.389 "allow_unrecognized_csi": false, 00:18:50.389 "method": "bdev_nvme_attach_controller", 00:18:50.389 "req_id": 1 00:18:50.389 } 00:18:50.389 Got JSON-RPC error response 00:18:50.389 response: 00:18:50.389 { 00:18:50.389 "code": -5, 00:18:50.389 "message": "Input/output error" 00:18:50.389 } 00:18:50.389 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1270033 00:18:50.389 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1270033 ']' 00:18:50.389 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1270033 00:18:50.389 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:50.389 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:50.389 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1270033 00:18:50.389 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:50.389 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:50.389 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1270033' 00:18:50.389 killing process with pid 1270033 00:18:50.389 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1270033 00:18:50.389 Received shutdown signal, test time was about 10.000000 seconds 00:18:50.389 00:18:50.389 Latency(us) 00:18:50.389 [2024-10-08T16:26:08.676Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.389 [2024-10-08T16:26:08.676Z] =================================================================================================================== 00:18:50.389 [2024-10-08T16:26:08.676Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:50.389 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1270033 00:18:50.647 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:50.647 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:50.647 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:50.647 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:50.648 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:50.648 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:50.648 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:50.648 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:50.648 
18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:50.648 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:50.648 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:50.648 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:50.648 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:50.648 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:50.648 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:50.648 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:50.648 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:50.648 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:50.648 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1270172 00:18:50.648 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:50.648 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:50.648 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1270172 /var/tmp/bdevperf.sock 00:18:50.648 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1270172 ']' 00:18:50.648 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:50.648 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:50.648 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:50.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:50.648 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:50.648 18:26:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.648 [2024-10-08 18:26:08.909723] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
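Annotation: this third NOT iteration passes an empty string where run_bdevperf expects a PSK path. keyring_file_add_key validates the path before loading the file and only accepts absolute paths, so the add fails with -1 (Operation not permitted) and the subsequent attach that names key0 fails with -126 (Required key not available), as the trace that follows shows. A sketch of the two cases, reusing the bdevperf RPC socket from this run:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"

  $RPC keyring_file_add_key key0 ''                     # rejected: "Non-absolute paths are not allowed"
  $RPC keyring_file_add_key key0 /tmp/tmp.WHQ3tZooah    # accepted: absolute path to a 0600 key file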
00:18:50.648 [2024-10-08 18:26:08.909797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1270172 ] 00:18:50.906 [2024-10-08 18:26:08.969909] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.906 [2024-10-08 18:26:09.076250] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:18:50.906 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:50.906 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:50.906 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:51.164 [2024-10-08 18:26:09.426688] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:51.164 [2024-10-08 18:26:09.426732] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:51.164 request: 00:18:51.164 { 00:18:51.164 "name": "key0", 00:18:51.164 "path": "", 00:18:51.164 "method": "keyring_file_add_key", 00:18:51.164 "req_id": 1 00:18:51.164 } 00:18:51.164 Got JSON-RPC error response 00:18:51.164 response: 00:18:51.164 { 00:18:51.164 "code": -1, 00:18:51.164 "message": "Operation not permitted" 00:18:51.164 } 00:18:51.164 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:51.422 [2024-10-08 18:26:09.687489] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:51.422 [2024-10-08 18:26:09.687544] bdev_nvme.c:6494:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:51.422 request: 00:18:51.422 { 00:18:51.422 "name": "TLSTEST", 00:18:51.422 "trtype": "tcp", 00:18:51.422 "traddr": "10.0.0.2", 00:18:51.422 "adrfam": "ipv4", 00:18:51.422 "trsvcid": "4420", 00:18:51.422 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.422 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:51.422 "prchk_reftag": false, 00:18:51.422 "prchk_guard": false, 00:18:51.422 "hdgst": false, 00:18:51.422 "ddgst": false, 00:18:51.422 "psk": "key0", 00:18:51.422 "allow_unrecognized_csi": false, 00:18:51.422 "method": "bdev_nvme_attach_controller", 00:18:51.422 "req_id": 1 00:18:51.422 } 00:18:51.422 Got JSON-RPC error response 00:18:51.422 response: 00:18:51.422 { 00:18:51.422 "code": -126, 00:18:51.422 "message": "Required key not available" 00:18:51.422 } 00:18:51.422 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1270172 00:18:51.422 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1270172 ']' 00:18:51.422 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1270172 00:18:51.422 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:51.680 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:51.680 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
1270172 00:18:51.680 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:51.680 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:51.680 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1270172' 00:18:51.680 killing process with pid 1270172 00:18:51.680 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1270172 00:18:51.680 Received shutdown signal, test time was about 10.000000 seconds 00:18:51.680 00:18:51.680 Latency(us) 00:18:51.680 [2024-10-08T16:26:09.967Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.680 [2024-10-08T16:26:09.967Z] =================================================================================================================== 00:18:51.680 [2024-10-08T16:26:09.967Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:51.680 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1270172 00:18:51.938 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:51.938 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:51.938 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:51.938 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:51.938 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:51.938 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1266406 00:18:51.938 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1266406 ']' 00:18:51.938 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1266406 00:18:51.938 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:51.938 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:51.938 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1266406 00:18:51.938 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:51.938 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:51.938 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1266406' 00:18:51.938 killing process with pid 1266406 00:18:51.938 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1266406 00:18:51.938 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1266406 00:18:52.199 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:52.199 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:52.199 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:18:52.199 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:18:52.199 18:26:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:52.199 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:18:52.199 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:18:52.199 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:52.199 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:52.199 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.NPWhb1Ovl0 00:18:52.199 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:52.199 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.NPWhb1Ovl0 00:18:52.199 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:52.200 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:52.200 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:52.200 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.200 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1270334 00:18:52.200 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:52.200 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1270334 00:18:52.200 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1270334 ']' 00:18:52.200 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.200 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:52.200 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.200 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:52.200 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.200 [2024-10-08 18:26:10.449929] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:18:52.200 [2024-10-08 18:26:10.450021] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.459 [2024-10-08 18:26:10.515800] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.459 [2024-10-08 18:26:10.617087] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.459 [2024-10-08 18:26:10.617162] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
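Annotation: the format_interchange_psk/format_key trace above converts the 48-character configured PSK into the interchange string NVMeTLSkey-1:02:...: that gets written to /tmp/tmp.NPWhb1Ovl0. The traced helper shells out to python; the sketch below is a hedged reimplementation, assuming (consistent with the NVMe-oF TLS PSK interchange format) that the base64 payload is the configured PSK bytes followed by their CRC-32 in little-endian order, and that the middle field is the digest selector passed as the third argument (01 for SHA-256, 02 for SHA-384; 2 in this run):

  # sketch of the traced 'format_key NVMeTLSkey-1 <configured-psk> 2' helper;
  # the appended CRC-32 is an assumption based on the NVMe-oF TLS PSK interchange format
  key=00112233445566778899aabbccddeeff0011223344556677
  digest=2
  python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); d = int(sys.argv[2]); crc = zlib.crc32(k).to_bytes(4, "little"); print(f"NVMeTLSkey-1:{d:02}:{base64.b64encode(k + crc).decode()}:")' "$key" "$digest"

The resulting string (key_long in the trace) is written to the mktemp path and chmod'ed to 0600 before the target is started; the same file is rejected by the keyring once the later iterations loosen it to 0666.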
00:18:52.459 [2024-10-08 18:26:10.617175] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:52.459 [2024-10-08 18:26:10.617199] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:52.459 [2024-10-08 18:26:10.617218] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:52.459 [2024-10-08 18:26:10.617839] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.459 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:52.459 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:52.459 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:52.459 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:52.459 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.717 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:52.717 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.NPWhb1Ovl0 00:18:52.717 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.NPWhb1Ovl0 00:18:52.717 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:52.976 [2024-10-08 18:26:11.017861] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:52.976 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:53.234 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:53.492 [2024-10-08 18:26:11.551278] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:53.492 [2024-10-08 18:26:11.551535] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:53.492 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:53.750 malloc0 00:18:53.750 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:54.008 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.NPWhb1Ovl0 00:18:54.266 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:54.525 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NPWhb1Ovl0 00:18:54.525 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:18:54.525 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:54.525 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:54.525 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.NPWhb1Ovl0 00:18:54.526 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:54.526 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1270623 00:18:54.526 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:54.526 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:54.526 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1270623 /var/tmp/bdevperf.sock 00:18:54.526 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1270623 ']' 00:18:54.526 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:54.526 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:54.526 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:54.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:54.526 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:54.526 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.526 [2024-10-08 18:26:12.703847] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
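Annotation: before the bdevperf run that starts here, setup_nvmf_tgt (traced above) configures the target end over the default /var/tmp/spdk.sock. Collected from the trace, the sequence is roughly the following sketch (the serial number and sizes are simply the values this test uses):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"

  $RPC nvmf_create_transport -t tcp -o                                                        # TCP transport
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10           # subsystem
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k enables TLS on the listener
  $RPC bdev_malloc_create 32 4096 -b malloc0                                                  # backing bdev
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1                          # namespace 1
  $RPC keyring_file_add_key key0 /tmp/tmp.NPWhb1Ovl0                                          # register the 0600 PSK file
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0   # allow host1 with that PSK

With host1 tied to key0 on the target and the same key file registered on the bdevperf side, the attach succeeds and the 10-second verify workload that follows completes at roughly 3.5k IOPS against malloc0.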
00:18:54.526 [2024-10-08 18:26:12.703927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1270623 ] 00:18:54.526 [2024-10-08 18:26:12.761693] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.784 [2024-10-08 18:26:12.872085] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:18:54.784 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:54.784 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:54.784 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NPWhb1Ovl0 00:18:55.042 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:55.300 [2024-10-08 18:26:13.522676] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:55.558 TLSTESTn1 00:18:55.558 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:55.558 Running I/O for 10 seconds... 00:18:57.869 3430.00 IOPS, 13.40 MiB/s [2024-10-08T16:26:16.720Z] 3485.00 IOPS, 13.61 MiB/s [2024-10-08T16:26:18.093Z] 3522.00 IOPS, 13.76 MiB/s [2024-10-08T16:26:19.025Z] 3504.50 IOPS, 13.69 MiB/s [2024-10-08T16:26:19.958Z] 3510.80 IOPS, 13.71 MiB/s [2024-10-08T16:26:20.914Z] 3528.17 IOPS, 13.78 MiB/s [2024-10-08T16:26:21.847Z] 3529.43 IOPS, 13.79 MiB/s [2024-10-08T16:26:22.780Z] 3533.62 IOPS, 13.80 MiB/s [2024-10-08T16:26:24.203Z] 3536.44 IOPS, 13.81 MiB/s [2024-10-08T16:26:24.203Z] 3527.60 IOPS, 13.78 MiB/s 00:19:05.916 Latency(us) 00:19:05.916 [2024-10-08T16:26:24.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.916 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:05.916 Verification LBA range: start 0x0 length 0x2000 00:19:05.916 TLSTESTn1 : 10.04 3525.91 13.77 0.00 0.00 36234.37 9854.67 41748.86 00:19:05.916 [2024-10-08T16:26:24.203Z] =================================================================================================================== 00:19:05.916 [2024-10-08T16:26:24.203Z] Total : 3525.91 13.77 0.00 0.00 36234.37 9854.67 41748.86 00:19:05.916 { 00:19:05.916 "results": [ 00:19:05.916 { 00:19:05.916 "job": "TLSTESTn1", 00:19:05.916 "core_mask": "0x4", 00:19:05.916 "workload": "verify", 00:19:05.916 "status": "finished", 00:19:05.916 "verify_range": { 00:19:05.916 "start": 0, 00:19:05.916 "length": 8192 00:19:05.916 }, 00:19:05.916 "queue_depth": 128, 00:19:05.916 "io_size": 4096, 00:19:05.916 "runtime": 10.041107, 00:19:05.916 "iops": 3525.906057967513, 00:19:05.916 "mibps": 13.773070538935597, 00:19:05.916 "io_failed": 0, 00:19:05.916 "io_timeout": 0, 00:19:05.916 "avg_latency_us": 36234.37426831871, 00:19:05.916 "min_latency_us": 9854.672592592593, 00:19:05.916 "max_latency_us": 41748.85925925926 00:19:05.916 } 00:19:05.916 ], 00:19:05.916 
"core_count": 1 00:19:05.916 } 00:19:05.916 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:05.916 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1270623 00:19:05.916 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1270623 ']' 00:19:05.916 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1270623 00:19:05.916 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:05.916 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:05.916 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1270623 00:19:05.916 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:05.916 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:05.916 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1270623' 00:19:05.916 killing process with pid 1270623 00:19:05.916 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1270623 00:19:05.916 Received shutdown signal, test time was about 10.000000 seconds 00:19:05.916 00:19:05.916 Latency(us) 00:19:05.916 [2024-10-08T16:26:24.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.916 [2024-10-08T16:26:24.203Z] =================================================================================================================== 00:19:05.916 [2024-10-08T16:26:24.203Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:05.916 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1270623 00:19:05.916 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.NPWhb1Ovl0 00:19:05.916 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NPWhb1Ovl0 00:19:05.916 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:05.916 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NPWhb1Ovl0 00:19:05.916 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:05.916 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:05.916 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:05.916 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:05.916 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NPWhb1Ovl0 00:19:05.916 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:05.916 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:05.916 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:19:05.916 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.NPWhb1Ovl0 00:19:05.916 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:05.916 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:05.916 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1271942 00:19:05.917 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:05.917 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1271942 /var/tmp/bdevperf.sock 00:19:05.917 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1271942 ']' 00:19:05.917 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:05.917 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:05.917 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:05.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:05.917 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:05.917 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:05.917 [2024-10-08 18:26:24.161369] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
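Annotation: the iteration starting at target/tls.sh@171 above deliberately loosens the key file to mode 0666 and expects run_bdevperf to fail. keyring_file_add_key checks the key file's permissions when it is registered, so the 0666 copy is rejected in the trace that follows ("Invalid permissions for key file '/tmp/tmp.NPWhb1Ovl0': 0100666"), and the attach that references key0 then fails with -126 (Required key not available). A small sketch of the state being exercised, assuming the keyring only accepts owner-only modes such as 0600:

  PSK_FILE=/tmp/tmp.NPWhb1Ovl0
  stat -c '%a' "$PSK_FILE"     # 666 at this point in the trace, hence the keyring rejection below
  chmod 0600 "$PSK_FILE"       # what target/tls.sh@182 does later, after which the same add succeeds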
00:19:05.917 [2024-10-08 18:26:24.161463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1271942 ] 00:19:06.175 [2024-10-08 18:26:24.220096] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.175 [2024-10-08 18:26:24.326435] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:06.175 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:06.175 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:06.175 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NPWhb1Ovl0 00:19:06.432 [2024-10-08 18:26:24.695799] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.NPWhb1Ovl0': 0100666 00:19:06.432 [2024-10-08 18:26:24.695836] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:06.432 request: 00:19:06.432 { 00:19:06.432 "name": "key0", 00:19:06.432 "path": "/tmp/tmp.NPWhb1Ovl0", 00:19:06.432 "method": "keyring_file_add_key", 00:19:06.432 "req_id": 1 00:19:06.432 } 00:19:06.432 Got JSON-RPC error response 00:19:06.432 response: 00:19:06.432 { 00:19:06.432 "code": -1, 00:19:06.432 "message": "Operation not permitted" 00:19:06.432 } 00:19:06.432 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:06.690 [2024-10-08 18:26:24.948568] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:06.690 [2024-10-08 18:26:24.948623] bdev_nvme.c:6494:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:06.690 request: 00:19:06.690 { 00:19:06.690 "name": "TLSTEST", 00:19:06.690 "trtype": "tcp", 00:19:06.690 "traddr": "10.0.0.2", 00:19:06.690 "adrfam": "ipv4", 00:19:06.690 "trsvcid": "4420", 00:19:06.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.690 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:06.690 "prchk_reftag": false, 00:19:06.690 "prchk_guard": false, 00:19:06.690 "hdgst": false, 00:19:06.690 "ddgst": false, 00:19:06.690 "psk": "key0", 00:19:06.690 "allow_unrecognized_csi": false, 00:19:06.690 "method": "bdev_nvme_attach_controller", 00:19:06.690 "req_id": 1 00:19:06.690 } 00:19:06.690 Got JSON-RPC error response 00:19:06.690 response: 00:19:06.690 { 00:19:06.690 "code": -126, 00:19:06.690 "message": "Required key not available" 00:19:06.690 } 00:19:06.690 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1271942 00:19:06.690 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1271942 ']' 00:19:06.690 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1271942 00:19:06.690 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:06.690 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:06.690 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1271942 00:19:06.948 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:06.948 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:06.948 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1271942' 00:19:06.948 killing process with pid 1271942 00:19:06.948 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1271942 00:19:06.948 Received shutdown signal, test time was about 10.000000 seconds 00:19:06.948 00:19:06.948 Latency(us) 00:19:06.948 [2024-10-08T16:26:25.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.948 [2024-10-08T16:26:25.235Z] =================================================================================================================== 00:19:06.948 [2024-10-08T16:26:25.235Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:06.948 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1271942 00:19:07.206 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:07.206 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:07.206 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:07.206 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:07.206 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:07.206 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1270334 00:19:07.206 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1270334 ']' 00:19:07.206 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1270334 00:19:07.206 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:07.206 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:07.206 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1270334 00:19:07.206 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:07.206 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:07.206 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1270334' 00:19:07.206 killing process with pid 1270334 00:19:07.206 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1270334 00:19:07.206 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1270334 00:19:07.464 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:07.464 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:07.464 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:07.464 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.464 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # 
nvmfpid=1272137 00:19:07.464 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:07.464 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1272137 00:19:07.464 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1272137 ']' 00:19:07.464 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.464 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:07.464 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.464 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:07.464 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.464 [2024-10-08 18:26:25.655350] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:19:07.464 [2024-10-08 18:26:25.655431] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.464 [2024-10-08 18:26:25.718082] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.722 [2024-10-08 18:26:25.824511] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:07.722 [2024-10-08 18:26:25.824574] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:07.722 [2024-10-08 18:26:25.824604] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:07.722 [2024-10-08 18:26:25.824616] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:07.722 [2024-10-08 18:26:25.824625] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:07.722 [2024-10-08 18:26:25.825195] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.722 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:07.722 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:07.722 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:07.722 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:07.722 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.722 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:07.722 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.NPWhb1Ovl0 00:19:07.722 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:07.722 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.NPWhb1Ovl0 00:19:07.722 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:19:07.722 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.722 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:19:07.722 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.723 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.NPWhb1Ovl0 00:19:07.723 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.NPWhb1Ovl0 00:19:07.723 18:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:07.980 [2024-10-08 18:26:26.200108] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:07.980 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:08.238 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:08.496 [2024-10-08 18:26:26.741551] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:08.496 [2024-10-08 18:26:26.741836] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:08.496 18:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:08.754 malloc0 00:19:09.011 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:09.269 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.NPWhb1Ovl0 00:19:09.526 [2024-10-08 
18:26:27.570318] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.NPWhb1Ovl0': 0100666 00:19:09.526 [2024-10-08 18:26:27.570359] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:09.526 request: 00:19:09.526 { 00:19:09.526 "name": "key0", 00:19:09.526 "path": "/tmp/tmp.NPWhb1Ovl0", 00:19:09.526 "method": "keyring_file_add_key", 00:19:09.526 "req_id": 1 00:19:09.526 } 00:19:09.526 Got JSON-RPC error response 00:19:09.526 response: 00:19:09.526 { 00:19:09.526 "code": -1, 00:19:09.526 "message": "Operation not permitted" 00:19:09.526 } 00:19:09.526 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:09.784 [2024-10-08 18:26:27.835040] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:09.784 [2024-10-08 18:26:27.835099] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:09.784 request: 00:19:09.784 { 00:19:09.784 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.784 "host": "nqn.2016-06.io.spdk:host1", 00:19:09.784 "psk": "key0", 00:19:09.784 "method": "nvmf_subsystem_add_host", 00:19:09.784 "req_id": 1 00:19:09.784 } 00:19:09.784 Got JSON-RPC error response 00:19:09.784 response: 00:19:09.784 { 00:19:09.784 "code": -32603, 00:19:09.784 "message": "Internal error" 00:19:09.784 } 00:19:09.785 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:09.785 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:09.785 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:09.785 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:09.785 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1272137 00:19:09.785 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1272137 ']' 00:19:09.785 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1272137 00:19:09.785 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:09.785 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:09.785 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1272137 00:19:09.785 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:09.785 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:09.785 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1272137' 00:19:09.785 killing process with pid 1272137 00:19:09.785 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1272137 00:19:09.785 18:26:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1272137 00:19:10.043 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.NPWhb1Ovl0 00:19:10.043 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:10.043 18:26:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:10.043 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:10.043 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.043 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1272515 00:19:10.043 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:10.043 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1272515 00:19:10.043 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1272515 ']' 00:19:10.043 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.043 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:10.043 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.043 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:10.043 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.043 [2024-10-08 18:26:28.230125] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:19:10.043 [2024-10-08 18:26:28.230211] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:10.043 [2024-10-08 18:26:28.293432] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.300 [2024-10-08 18:26:28.393382] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:10.300 [2024-10-08 18:26:28.393457] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:10.300 [2024-10-08 18:26:28.393471] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:10.300 [2024-10-08 18:26:28.393482] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:10.300 [2024-10-08 18:26:28.393491] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
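Note: the keyring_file_add_key failure above (JSON-RPC code -1, "Operation not permitted") comes from keyring_file_check_path rejecting the 0666 mode on the key file; the file keyring refuses keys readable or writable by group/other, which is why the test chmods the file to 0600 before this second pass. A minimal sketch of the working registration, with a placeholder key path and placeholder PSK material (neither is taken from this run):
    # placeholder path and PSK interchange string; only the 0600 mode matters here
    KEY=/tmp/psk.txt
    echo 'NVMeTLSkey-1:01:<base64-psk-material>:' > "$KEY"
    chmod 0600 "$KEY"                 # 0666 is rejected by keyring_file_check_path
    scripts/rpc.py keyring_file_add_key key0 "$KEY"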
00:19:10.300 [2024-10-08 18:26:28.394045] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:10.300 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:10.300 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:10.300 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:10.300 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:10.300 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.300 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:10.300 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.NPWhb1Ovl0 00:19:10.300 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.NPWhb1Ovl0 00:19:10.300 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:10.558 [2024-10-08 18:26:28.788158] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:10.558 18:26:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:10.815 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:11.073 [2024-10-08 18:26:29.325545] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:11.073 [2024-10-08 18:26:29.325814] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:11.073 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:11.331 malloc0 00:19:11.588 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:11.846 18:26:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.NPWhb1Ovl0 00:19:12.104 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:12.362 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1272802 00:19:12.362 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:12.362 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:12.362 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1272802 /var/tmp/bdevperf.sock 00:19:12.362 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 1272802 ']' 00:19:12.362 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:12.362 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:12.362 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:12.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:12.362 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:12.362 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.362 [2024-10-08 18:26:30.510402] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:19:12.362 [2024-10-08 18:26:30.510481] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1272802 ] 00:19:12.362 [2024-10-08 18:26:30.571837] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.620 [2024-10-08 18:26:30.681871] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.620 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:12.620 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:12.620 18:26:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NPWhb1Ovl0 00:19:12.880 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:13.137 [2024-10-08 18:26:31.318265] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:13.137 TLSTESTn1 00:19:13.137 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:13.702 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:13.702 "subsystems": [ 00:19:13.702 { 00:19:13.702 "subsystem": "keyring", 00:19:13.702 "config": [ 00:19:13.702 { 00:19:13.702 "method": "keyring_file_add_key", 00:19:13.702 "params": { 00:19:13.702 "name": "key0", 00:19:13.702 "path": "/tmp/tmp.NPWhb1Ovl0" 00:19:13.702 } 00:19:13.702 } 00:19:13.702 ] 00:19:13.702 }, 00:19:13.702 { 00:19:13.702 "subsystem": "iobuf", 00:19:13.702 "config": [ 00:19:13.702 { 00:19:13.702 "method": "iobuf_set_options", 00:19:13.702 "params": { 00:19:13.702 "small_pool_count": 8192, 00:19:13.702 "large_pool_count": 1024, 00:19:13.702 "small_bufsize": 8192, 00:19:13.702 "large_bufsize": 135168 00:19:13.702 } 00:19:13.702 } 00:19:13.702 ] 00:19:13.702 }, 00:19:13.702 { 00:19:13.702 "subsystem": "sock", 00:19:13.702 "config": [ 00:19:13.702 { 00:19:13.702 "method": "sock_set_default_impl", 00:19:13.702 "params": { 00:19:13.702 "impl_name": "posix" 00:19:13.702 } 00:19:13.702 }, 
00:19:13.702 { 00:19:13.702 "method": "sock_impl_set_options", 00:19:13.702 "params": { 00:19:13.702 "impl_name": "ssl", 00:19:13.702 "recv_buf_size": 4096, 00:19:13.702 "send_buf_size": 4096, 00:19:13.702 "enable_recv_pipe": true, 00:19:13.702 "enable_quickack": false, 00:19:13.702 "enable_placement_id": 0, 00:19:13.702 "enable_zerocopy_send_server": true, 00:19:13.702 "enable_zerocopy_send_client": false, 00:19:13.702 "zerocopy_threshold": 0, 00:19:13.702 "tls_version": 0, 00:19:13.702 "enable_ktls": false 00:19:13.702 } 00:19:13.702 }, 00:19:13.702 { 00:19:13.702 "method": "sock_impl_set_options", 00:19:13.702 "params": { 00:19:13.702 "impl_name": "posix", 00:19:13.702 "recv_buf_size": 2097152, 00:19:13.702 "send_buf_size": 2097152, 00:19:13.702 "enable_recv_pipe": true, 00:19:13.702 "enable_quickack": false, 00:19:13.702 "enable_placement_id": 0, 00:19:13.702 "enable_zerocopy_send_server": true, 00:19:13.702 "enable_zerocopy_send_client": false, 00:19:13.702 "zerocopy_threshold": 0, 00:19:13.702 "tls_version": 0, 00:19:13.702 "enable_ktls": false 00:19:13.702 } 00:19:13.702 } 00:19:13.702 ] 00:19:13.702 }, 00:19:13.702 { 00:19:13.702 "subsystem": "vmd", 00:19:13.702 "config": [] 00:19:13.702 }, 00:19:13.702 { 00:19:13.702 "subsystem": "accel", 00:19:13.702 "config": [ 00:19:13.702 { 00:19:13.702 "method": "accel_set_options", 00:19:13.702 "params": { 00:19:13.702 "small_cache_size": 128, 00:19:13.702 "large_cache_size": 16, 00:19:13.703 "task_count": 2048, 00:19:13.703 "sequence_count": 2048, 00:19:13.703 "buf_count": 2048 00:19:13.703 } 00:19:13.703 } 00:19:13.703 ] 00:19:13.703 }, 00:19:13.703 { 00:19:13.703 "subsystem": "bdev", 00:19:13.703 "config": [ 00:19:13.703 { 00:19:13.703 "method": "bdev_set_options", 00:19:13.703 "params": { 00:19:13.703 "bdev_io_pool_size": 65535, 00:19:13.703 "bdev_io_cache_size": 256, 00:19:13.703 "bdev_auto_examine": true, 00:19:13.703 "iobuf_small_cache_size": 128, 00:19:13.703 "iobuf_large_cache_size": 16 00:19:13.703 } 00:19:13.703 }, 00:19:13.703 { 00:19:13.703 "method": "bdev_raid_set_options", 00:19:13.703 "params": { 00:19:13.703 "process_window_size_kb": 1024, 00:19:13.703 "process_max_bandwidth_mb_sec": 0 00:19:13.703 } 00:19:13.703 }, 00:19:13.703 { 00:19:13.703 "method": "bdev_iscsi_set_options", 00:19:13.703 "params": { 00:19:13.703 "timeout_sec": 30 00:19:13.703 } 00:19:13.703 }, 00:19:13.703 { 00:19:13.703 "method": "bdev_nvme_set_options", 00:19:13.703 "params": { 00:19:13.703 "action_on_timeout": "none", 00:19:13.703 "timeout_us": 0, 00:19:13.703 "timeout_admin_us": 0, 00:19:13.703 "keep_alive_timeout_ms": 10000, 00:19:13.703 "arbitration_burst": 0, 00:19:13.703 "low_priority_weight": 0, 00:19:13.703 "medium_priority_weight": 0, 00:19:13.703 "high_priority_weight": 0, 00:19:13.703 "nvme_adminq_poll_period_us": 10000, 00:19:13.703 "nvme_ioq_poll_period_us": 0, 00:19:13.703 "io_queue_requests": 0, 00:19:13.703 "delay_cmd_submit": true, 00:19:13.703 "transport_retry_count": 4, 00:19:13.703 "bdev_retry_count": 3, 00:19:13.703 "transport_ack_timeout": 0, 00:19:13.703 "ctrlr_loss_timeout_sec": 0, 00:19:13.703 "reconnect_delay_sec": 0, 00:19:13.703 "fast_io_fail_timeout_sec": 0, 00:19:13.703 "disable_auto_failback": false, 00:19:13.703 "generate_uuids": false, 00:19:13.703 "transport_tos": 0, 00:19:13.703 "nvme_error_stat": false, 00:19:13.703 "rdma_srq_size": 0, 00:19:13.703 "io_path_stat": false, 00:19:13.703 "allow_accel_sequence": false, 00:19:13.703 "rdma_max_cq_size": 0, 00:19:13.703 "rdma_cm_event_timeout_ms": 0, 00:19:13.703 
"dhchap_digests": [ 00:19:13.703 "sha256", 00:19:13.703 "sha384", 00:19:13.703 "sha512" 00:19:13.703 ], 00:19:13.703 "dhchap_dhgroups": [ 00:19:13.703 "null", 00:19:13.703 "ffdhe2048", 00:19:13.703 "ffdhe3072", 00:19:13.703 "ffdhe4096", 00:19:13.703 "ffdhe6144", 00:19:13.703 "ffdhe8192" 00:19:13.703 ] 00:19:13.703 } 00:19:13.703 }, 00:19:13.703 { 00:19:13.703 "method": "bdev_nvme_set_hotplug", 00:19:13.703 "params": { 00:19:13.703 "period_us": 100000, 00:19:13.703 "enable": false 00:19:13.703 } 00:19:13.703 }, 00:19:13.703 { 00:19:13.703 "method": "bdev_malloc_create", 00:19:13.703 "params": { 00:19:13.703 "name": "malloc0", 00:19:13.703 "num_blocks": 8192, 00:19:13.703 "block_size": 4096, 00:19:13.703 "physical_block_size": 4096, 00:19:13.703 "uuid": "2375b06f-1dcd-40af-8da6-53370c73ad13", 00:19:13.703 "optimal_io_boundary": 0, 00:19:13.703 "md_size": 0, 00:19:13.703 "dif_type": 0, 00:19:13.703 "dif_is_head_of_md": false, 00:19:13.703 "dif_pi_format": 0 00:19:13.703 } 00:19:13.703 }, 00:19:13.703 { 00:19:13.703 "method": "bdev_wait_for_examine" 00:19:13.703 } 00:19:13.703 ] 00:19:13.703 }, 00:19:13.703 { 00:19:13.703 "subsystem": "nbd", 00:19:13.703 "config": [] 00:19:13.703 }, 00:19:13.703 { 00:19:13.703 "subsystem": "scheduler", 00:19:13.703 "config": [ 00:19:13.703 { 00:19:13.703 "method": "framework_set_scheduler", 00:19:13.703 "params": { 00:19:13.703 "name": "static" 00:19:13.703 } 00:19:13.703 } 00:19:13.703 ] 00:19:13.703 }, 00:19:13.703 { 00:19:13.703 "subsystem": "nvmf", 00:19:13.703 "config": [ 00:19:13.703 { 00:19:13.703 "method": "nvmf_set_config", 00:19:13.703 "params": { 00:19:13.703 "discovery_filter": "match_any", 00:19:13.703 "admin_cmd_passthru": { 00:19:13.703 "identify_ctrlr": false 00:19:13.703 }, 00:19:13.703 "dhchap_digests": [ 00:19:13.703 "sha256", 00:19:13.703 "sha384", 00:19:13.703 "sha512" 00:19:13.703 ], 00:19:13.703 "dhchap_dhgroups": [ 00:19:13.703 "null", 00:19:13.703 "ffdhe2048", 00:19:13.703 "ffdhe3072", 00:19:13.703 "ffdhe4096", 00:19:13.703 "ffdhe6144", 00:19:13.703 "ffdhe8192" 00:19:13.703 ] 00:19:13.703 } 00:19:13.703 }, 00:19:13.703 { 00:19:13.703 "method": "nvmf_set_max_subsystems", 00:19:13.703 "params": { 00:19:13.703 "max_subsystems": 1024 00:19:13.703 } 00:19:13.703 }, 00:19:13.703 { 00:19:13.703 "method": "nvmf_set_crdt", 00:19:13.703 "params": { 00:19:13.703 "crdt1": 0, 00:19:13.703 "crdt2": 0, 00:19:13.703 "crdt3": 0 00:19:13.703 } 00:19:13.703 }, 00:19:13.703 { 00:19:13.703 "method": "nvmf_create_transport", 00:19:13.703 "params": { 00:19:13.703 "trtype": "TCP", 00:19:13.703 "max_queue_depth": 128, 00:19:13.703 "max_io_qpairs_per_ctrlr": 127, 00:19:13.703 "in_capsule_data_size": 4096, 00:19:13.703 "max_io_size": 131072, 00:19:13.703 "io_unit_size": 131072, 00:19:13.703 "max_aq_depth": 128, 00:19:13.703 "num_shared_buffers": 511, 00:19:13.703 "buf_cache_size": 4294967295, 00:19:13.703 "dif_insert_or_strip": false, 00:19:13.703 "zcopy": false, 00:19:13.703 "c2h_success": false, 00:19:13.703 "sock_priority": 0, 00:19:13.703 "abort_timeout_sec": 1, 00:19:13.703 "ack_timeout": 0, 00:19:13.703 "data_wr_pool_size": 0 00:19:13.703 } 00:19:13.703 }, 00:19:13.703 { 00:19:13.703 "method": "nvmf_create_subsystem", 00:19:13.703 "params": { 00:19:13.703 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.703 "allow_any_host": false, 00:19:13.703 "serial_number": "SPDK00000000000001", 00:19:13.703 "model_number": "SPDK bdev Controller", 00:19:13.703 "max_namespaces": 10, 00:19:13.703 "min_cntlid": 1, 00:19:13.703 "max_cntlid": 65519, 00:19:13.703 
"ana_reporting": false 00:19:13.703 } 00:19:13.703 }, 00:19:13.703 { 00:19:13.703 "method": "nvmf_subsystem_add_host", 00:19:13.703 "params": { 00:19:13.703 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.703 "host": "nqn.2016-06.io.spdk:host1", 00:19:13.703 "psk": "key0" 00:19:13.703 } 00:19:13.703 }, 00:19:13.703 { 00:19:13.703 "method": "nvmf_subsystem_add_ns", 00:19:13.703 "params": { 00:19:13.703 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.703 "namespace": { 00:19:13.703 "nsid": 1, 00:19:13.703 "bdev_name": "malloc0", 00:19:13.703 "nguid": "2375B06F1DCD40AF8DA653370C73AD13", 00:19:13.703 "uuid": "2375b06f-1dcd-40af-8da6-53370c73ad13", 00:19:13.703 "no_auto_visible": false 00:19:13.703 } 00:19:13.703 } 00:19:13.703 }, 00:19:13.703 { 00:19:13.703 "method": "nvmf_subsystem_add_listener", 00:19:13.703 "params": { 00:19:13.703 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.703 "listen_address": { 00:19:13.703 "trtype": "TCP", 00:19:13.703 "adrfam": "IPv4", 00:19:13.703 "traddr": "10.0.0.2", 00:19:13.703 "trsvcid": "4420" 00:19:13.703 }, 00:19:13.703 "secure_channel": true 00:19:13.703 } 00:19:13.703 } 00:19:13.703 ] 00:19:13.703 } 00:19:13.703 ] 00:19:13.703 }' 00:19:13.703 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:13.961 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:13.961 "subsystems": [ 00:19:13.961 { 00:19:13.961 "subsystem": "keyring", 00:19:13.961 "config": [ 00:19:13.961 { 00:19:13.961 "method": "keyring_file_add_key", 00:19:13.961 "params": { 00:19:13.961 "name": "key0", 00:19:13.961 "path": "/tmp/tmp.NPWhb1Ovl0" 00:19:13.961 } 00:19:13.961 } 00:19:13.961 ] 00:19:13.961 }, 00:19:13.961 { 00:19:13.961 "subsystem": "iobuf", 00:19:13.961 "config": [ 00:19:13.961 { 00:19:13.961 "method": "iobuf_set_options", 00:19:13.961 "params": { 00:19:13.961 "small_pool_count": 8192, 00:19:13.961 "large_pool_count": 1024, 00:19:13.961 "small_bufsize": 8192, 00:19:13.961 "large_bufsize": 135168 00:19:13.961 } 00:19:13.961 } 00:19:13.961 ] 00:19:13.961 }, 00:19:13.961 { 00:19:13.961 "subsystem": "sock", 00:19:13.961 "config": [ 00:19:13.961 { 00:19:13.961 "method": "sock_set_default_impl", 00:19:13.961 "params": { 00:19:13.961 "impl_name": "posix" 00:19:13.961 } 00:19:13.961 }, 00:19:13.961 { 00:19:13.961 "method": "sock_impl_set_options", 00:19:13.961 "params": { 00:19:13.961 "impl_name": "ssl", 00:19:13.961 "recv_buf_size": 4096, 00:19:13.961 "send_buf_size": 4096, 00:19:13.961 "enable_recv_pipe": true, 00:19:13.961 "enable_quickack": false, 00:19:13.961 "enable_placement_id": 0, 00:19:13.961 "enable_zerocopy_send_server": true, 00:19:13.961 "enable_zerocopy_send_client": false, 00:19:13.961 "zerocopy_threshold": 0, 00:19:13.961 "tls_version": 0, 00:19:13.961 "enable_ktls": false 00:19:13.961 } 00:19:13.961 }, 00:19:13.961 { 00:19:13.961 "method": "sock_impl_set_options", 00:19:13.961 "params": { 00:19:13.961 "impl_name": "posix", 00:19:13.961 "recv_buf_size": 2097152, 00:19:13.961 "send_buf_size": 2097152, 00:19:13.961 "enable_recv_pipe": true, 00:19:13.961 "enable_quickack": false, 00:19:13.961 "enable_placement_id": 0, 00:19:13.961 "enable_zerocopy_send_server": true, 00:19:13.961 "enable_zerocopy_send_client": false, 00:19:13.961 "zerocopy_threshold": 0, 00:19:13.961 "tls_version": 0, 00:19:13.961 "enable_ktls": false 00:19:13.961 } 00:19:13.961 } 00:19:13.961 ] 00:19:13.961 }, 00:19:13.961 { 00:19:13.961 
"subsystem": "vmd", 00:19:13.961 "config": [] 00:19:13.961 }, 00:19:13.961 { 00:19:13.961 "subsystem": "accel", 00:19:13.961 "config": [ 00:19:13.961 { 00:19:13.961 "method": "accel_set_options", 00:19:13.961 "params": { 00:19:13.961 "small_cache_size": 128, 00:19:13.961 "large_cache_size": 16, 00:19:13.961 "task_count": 2048, 00:19:13.961 "sequence_count": 2048, 00:19:13.961 "buf_count": 2048 00:19:13.961 } 00:19:13.961 } 00:19:13.961 ] 00:19:13.961 }, 00:19:13.961 { 00:19:13.961 "subsystem": "bdev", 00:19:13.961 "config": [ 00:19:13.961 { 00:19:13.961 "method": "bdev_set_options", 00:19:13.961 "params": { 00:19:13.961 "bdev_io_pool_size": 65535, 00:19:13.961 "bdev_io_cache_size": 256, 00:19:13.961 "bdev_auto_examine": true, 00:19:13.961 "iobuf_small_cache_size": 128, 00:19:13.961 "iobuf_large_cache_size": 16 00:19:13.961 } 00:19:13.961 }, 00:19:13.961 { 00:19:13.961 "method": "bdev_raid_set_options", 00:19:13.961 "params": { 00:19:13.961 "process_window_size_kb": 1024, 00:19:13.961 "process_max_bandwidth_mb_sec": 0 00:19:13.961 } 00:19:13.961 }, 00:19:13.961 { 00:19:13.961 "method": "bdev_iscsi_set_options", 00:19:13.961 "params": { 00:19:13.961 "timeout_sec": 30 00:19:13.961 } 00:19:13.961 }, 00:19:13.961 { 00:19:13.961 "method": "bdev_nvme_set_options", 00:19:13.961 "params": { 00:19:13.961 "action_on_timeout": "none", 00:19:13.961 "timeout_us": 0, 00:19:13.961 "timeout_admin_us": 0, 00:19:13.961 "keep_alive_timeout_ms": 10000, 00:19:13.961 "arbitration_burst": 0, 00:19:13.961 "low_priority_weight": 0, 00:19:13.961 "medium_priority_weight": 0, 00:19:13.961 "high_priority_weight": 0, 00:19:13.961 "nvme_adminq_poll_period_us": 10000, 00:19:13.961 "nvme_ioq_poll_period_us": 0, 00:19:13.961 "io_queue_requests": 512, 00:19:13.961 "delay_cmd_submit": true, 00:19:13.961 "transport_retry_count": 4, 00:19:13.961 "bdev_retry_count": 3, 00:19:13.961 "transport_ack_timeout": 0, 00:19:13.961 "ctrlr_loss_timeout_sec": 0, 00:19:13.961 "reconnect_delay_sec": 0, 00:19:13.961 "fast_io_fail_timeout_sec": 0, 00:19:13.961 "disable_auto_failback": false, 00:19:13.961 "generate_uuids": false, 00:19:13.961 "transport_tos": 0, 00:19:13.961 "nvme_error_stat": false, 00:19:13.961 "rdma_srq_size": 0, 00:19:13.961 "io_path_stat": false, 00:19:13.961 "allow_accel_sequence": false, 00:19:13.961 "rdma_max_cq_size": 0, 00:19:13.961 "rdma_cm_event_timeout_ms": 0, 00:19:13.961 "dhchap_digests": [ 00:19:13.961 "sha256", 00:19:13.961 "sha384", 00:19:13.961 "sha512" 00:19:13.961 ], 00:19:13.961 "dhchap_dhgroups": [ 00:19:13.961 "null", 00:19:13.961 "ffdhe2048", 00:19:13.961 "ffdhe3072", 00:19:13.961 "ffdhe4096", 00:19:13.961 "ffdhe6144", 00:19:13.961 "ffdhe8192" 00:19:13.961 ] 00:19:13.961 } 00:19:13.961 }, 00:19:13.961 { 00:19:13.961 "method": "bdev_nvme_attach_controller", 00:19:13.961 "params": { 00:19:13.961 "name": "TLSTEST", 00:19:13.961 "trtype": "TCP", 00:19:13.961 "adrfam": "IPv4", 00:19:13.961 "traddr": "10.0.0.2", 00:19:13.961 "trsvcid": "4420", 00:19:13.961 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.961 "prchk_reftag": false, 00:19:13.961 "prchk_guard": false, 00:19:13.961 "ctrlr_loss_timeout_sec": 0, 00:19:13.961 "reconnect_delay_sec": 0, 00:19:13.961 "fast_io_fail_timeout_sec": 0, 00:19:13.961 "psk": "key0", 00:19:13.961 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:13.961 "hdgst": false, 00:19:13.961 "ddgst": false, 00:19:13.961 "multipath": "multipath" 00:19:13.961 } 00:19:13.961 }, 00:19:13.961 { 00:19:13.961 "method": "bdev_nvme_set_hotplug", 00:19:13.961 "params": { 00:19:13.961 "period_us": 
100000, 00:19:13.961 "enable": false 00:19:13.961 } 00:19:13.961 }, 00:19:13.961 { 00:19:13.961 "method": "bdev_wait_for_examine" 00:19:13.961 } 00:19:13.961 ] 00:19:13.961 }, 00:19:13.961 { 00:19:13.961 "subsystem": "nbd", 00:19:13.961 "config": [] 00:19:13.961 } 00:19:13.961 ] 00:19:13.961 }' 00:19:13.961 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1272802 00:19:13.961 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1272802 ']' 00:19:13.961 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1272802 00:19:13.961 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:13.961 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:13.961 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1272802 00:19:13.961 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:13.961 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:13.961 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1272802' 00:19:13.961 killing process with pid 1272802 00:19:13.961 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1272802 00:19:13.961 Received shutdown signal, test time was about 10.000000 seconds 00:19:13.961 00:19:13.961 Latency(us) 00:19:13.961 [2024-10-08T16:26:32.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.961 [2024-10-08T16:26:32.248Z] =================================================================================================================== 00:19:13.961 [2024-10-08T16:26:32.248Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:13.961 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1272802 00:19:14.219 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1272515 00:19:14.219 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1272515 ']' 00:19:14.219 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1272515 00:19:14.219 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:14.219 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:14.219 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1272515 00:19:14.219 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:14.219 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:14.219 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1272515' 00:19:14.219 killing process with pid 1272515 00:19:14.219 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1272515 00:19:14.219 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1272515 00:19:14.478 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:14.478 
18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:14.478 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:14.478 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:14.478 "subsystems": [ 00:19:14.478 { 00:19:14.478 "subsystem": "keyring", 00:19:14.478 "config": [ 00:19:14.478 { 00:19:14.478 "method": "keyring_file_add_key", 00:19:14.478 "params": { 00:19:14.478 "name": "key0", 00:19:14.478 "path": "/tmp/tmp.NPWhb1Ovl0" 00:19:14.478 } 00:19:14.478 } 00:19:14.478 ] 00:19:14.478 }, 00:19:14.478 { 00:19:14.478 "subsystem": "iobuf", 00:19:14.478 "config": [ 00:19:14.478 { 00:19:14.478 "method": "iobuf_set_options", 00:19:14.478 "params": { 00:19:14.478 "small_pool_count": 8192, 00:19:14.478 "large_pool_count": 1024, 00:19:14.478 "small_bufsize": 8192, 00:19:14.478 "large_bufsize": 135168 00:19:14.478 } 00:19:14.478 } 00:19:14.478 ] 00:19:14.478 }, 00:19:14.478 { 00:19:14.478 "subsystem": "sock", 00:19:14.478 "config": [ 00:19:14.478 { 00:19:14.478 "method": "sock_set_default_impl", 00:19:14.478 "params": { 00:19:14.478 "impl_name": "posix" 00:19:14.478 } 00:19:14.478 }, 00:19:14.478 { 00:19:14.478 "method": "sock_impl_set_options", 00:19:14.478 "params": { 00:19:14.478 "impl_name": "ssl", 00:19:14.478 "recv_buf_size": 4096, 00:19:14.478 "send_buf_size": 4096, 00:19:14.478 "enable_recv_pipe": true, 00:19:14.478 "enable_quickack": false, 00:19:14.478 "enable_placement_id": 0, 00:19:14.478 "enable_zerocopy_send_server": true, 00:19:14.478 "enable_zerocopy_send_client": false, 00:19:14.478 "zerocopy_threshold": 0, 00:19:14.478 "tls_version": 0, 00:19:14.478 "enable_ktls": false 00:19:14.478 } 00:19:14.478 }, 00:19:14.478 { 00:19:14.478 "method": "sock_impl_set_options", 00:19:14.478 "params": { 00:19:14.478 "impl_name": "posix", 00:19:14.478 "recv_buf_size": 2097152, 00:19:14.478 "send_buf_size": 2097152, 00:19:14.478 "enable_recv_pipe": true, 00:19:14.478 "enable_quickack": false, 00:19:14.478 "enable_placement_id": 0, 00:19:14.478 "enable_zerocopy_send_server": true, 00:19:14.478 "enable_zerocopy_send_client": false, 00:19:14.478 "zerocopy_threshold": 0, 00:19:14.478 "tls_version": 0, 00:19:14.478 "enable_ktls": false 00:19:14.478 } 00:19:14.478 } 00:19:14.478 ] 00:19:14.478 }, 00:19:14.478 { 00:19:14.478 "subsystem": "vmd", 00:19:14.478 "config": [] 00:19:14.478 }, 00:19:14.478 { 00:19:14.478 "subsystem": "accel", 00:19:14.478 "config": [ 00:19:14.478 { 00:19:14.478 "method": "accel_set_options", 00:19:14.478 "params": { 00:19:14.478 "small_cache_size": 128, 00:19:14.478 "large_cache_size": 16, 00:19:14.478 "task_count": 2048, 00:19:14.478 "sequence_count": 2048, 00:19:14.478 "buf_count": 2048 00:19:14.478 } 00:19:14.478 } 00:19:14.478 ] 00:19:14.478 }, 00:19:14.478 { 00:19:14.478 "subsystem": "bdev", 00:19:14.478 "config": [ 00:19:14.478 { 00:19:14.478 "method": "bdev_set_options", 00:19:14.478 "params": { 00:19:14.478 "bdev_io_pool_size": 65535, 00:19:14.478 "bdev_io_cache_size": 256, 00:19:14.478 "bdev_auto_examine": true, 00:19:14.478 "iobuf_small_cache_size": 128, 00:19:14.478 "iobuf_large_cache_size": 16 00:19:14.478 } 00:19:14.478 }, 00:19:14.478 { 00:19:14.478 "method": "bdev_raid_set_options", 00:19:14.478 "params": { 00:19:14.478 "process_window_size_kb": 1024, 00:19:14.478 "process_max_bandwidth_mb_sec": 0 00:19:14.478 } 00:19:14.478 }, 00:19:14.478 { 00:19:14.478 "method": "bdev_iscsi_set_options", 00:19:14.478 "params": { 00:19:14.478 
"timeout_sec": 30 00:19:14.478 } 00:19:14.478 }, 00:19:14.478 { 00:19:14.478 "method": "bdev_nvme_set_options", 00:19:14.478 "params": { 00:19:14.478 "action_on_timeout": "none", 00:19:14.478 "timeout_us": 0, 00:19:14.478 "timeout_admin_us": 0, 00:19:14.478 "keep_alive_timeout_ms": 10000, 00:19:14.478 "arbitration_burst": 0, 00:19:14.478 "low_priority_weight": 0, 00:19:14.478 "medium_priority_weight": 0, 00:19:14.478 "high_priority_weight": 0, 00:19:14.478 "nvme_adminq_poll_period_us": 10000, 00:19:14.478 "nvme_ioq_poll_period_us": 0, 00:19:14.478 "io_queue_requests": 0, 00:19:14.478 "delay_cmd_submit": true, 00:19:14.478 "transport_retry_count": 4, 00:19:14.478 "bdev_retry_count": 3, 00:19:14.478 "transport_ack_timeout": 0, 00:19:14.478 "ctrlr_loss_timeout_sec": 0, 00:19:14.478 "reconnect_delay_sec": 0, 00:19:14.478 "fast_io_fail_timeout_sec": 0, 00:19:14.478 "disable_auto_failback": false, 00:19:14.478 "generate_uuids": false, 00:19:14.478 "transport_tos": 0, 00:19:14.478 "nvme_error_stat": false, 00:19:14.478 "rdma_srq_size": 0, 00:19:14.478 "io_path_stat": false, 00:19:14.478 "allow_accel_sequence": false, 00:19:14.478 "rdma_max_cq_size": 0, 00:19:14.478 "rdma_cm_event_timeout_ms": 0, 00:19:14.478 "dhchap_digests": [ 00:19:14.478 "sha256", 00:19:14.478 "sha384", 00:19:14.478 "sha512" 00:19:14.478 ], 00:19:14.478 "dhchap_dhgroups": [ 00:19:14.478 "null", 00:19:14.478 "ffdhe2048", 00:19:14.478 "ffdhe3072", 00:19:14.478 "ffdhe4096", 00:19:14.478 "ffdhe6144", 00:19:14.478 "ffdhe8192" 00:19:14.478 ] 00:19:14.478 } 00:19:14.478 }, 00:19:14.478 { 00:19:14.478 "method": "bdev_nvme_set_hotplug", 00:19:14.478 "params": { 00:19:14.478 "period_us": 100000, 00:19:14.478 "enable": false 00:19:14.478 } 00:19:14.478 }, 00:19:14.478 { 00:19:14.478 "method": "bdev_malloc_create", 00:19:14.478 "params": { 00:19:14.478 "name": "malloc0", 00:19:14.478 "num_blocks": 8192, 00:19:14.478 "block_size": 4096, 00:19:14.478 "physical_block_size": 4096, 00:19:14.479 "uuid": "2375b06f-1dcd-40af-8da6-53370c73ad13", 00:19:14.479 "optimal_io_boundary": 0, 00:19:14.479 "md_size": 0, 00:19:14.479 "dif_type": 0, 00:19:14.479 "dif_is_head_of_md": false, 00:19:14.479 "dif_pi_format": 0 00:19:14.479 } 00:19:14.479 }, 00:19:14.479 { 00:19:14.479 "method": "bdev_wait_for_examine" 00:19:14.479 } 00:19:14.479 ] 00:19:14.479 }, 00:19:14.479 { 00:19:14.479 "subsystem": "nbd", 00:19:14.479 "config": [] 00:19:14.479 }, 00:19:14.479 { 00:19:14.479 "subsystem": "scheduler", 00:19:14.479 "config": [ 00:19:14.479 { 00:19:14.479 "method": "framework_set_scheduler", 00:19:14.479 "params": { 00:19:14.479 "name": "static" 00:19:14.479 } 00:19:14.479 } 00:19:14.479 ] 00:19:14.479 }, 00:19:14.479 { 00:19:14.479 "subsystem": "nvmf", 00:19:14.479 "config": [ 00:19:14.479 { 00:19:14.479 "method": "nvmf_set_config", 00:19:14.479 "params": { 00:19:14.479 "discovery_filter": "match_any", 00:19:14.479 "admin_cmd_passthru": { 00:19:14.479 "identify_ctrlr": false 00:19:14.479 }, 00:19:14.479 "dhchap_digests": [ 00:19:14.479 "sha256", 00:19:14.479 "sha384", 00:19:14.479 "sha512" 00:19:14.479 ], 00:19:14.479 "dhchap_dhgroups": [ 00:19:14.479 "null", 00:19:14.479 "ffdhe2048", 00:19:14.479 "ffdhe3072", 00:19:14.479 "ffdhe4096", 00:19:14.479 "ffdhe6144", 00:19:14.479 "ffdhe8192" 00:19:14.479 ] 00:19:14.479 } 00:19:14.479 }, 00:19:14.479 { 00:19:14.479 "method": "nvmf_set_max_subsystems", 00:19:14.479 "params": { 00:19:14.479 "max_subsystems": 1024 00:19:14.479 } 00:19:14.479 }, 00:19:14.479 { 00:19:14.479 "method": "nvmf_set_crdt", 00:19:14.479 "params": { 
00:19:14.479 "crdt1": 0, 00:19:14.479 "crdt2": 0, 00:19:14.479 "crdt3": 0 00:19:14.479 } 00:19:14.479 }, 00:19:14.479 { 00:19:14.479 "method": "nvmf_create_transport", 00:19:14.479 "params": { 00:19:14.479 "trtype": "TCP", 00:19:14.479 "max_queue_depth": 128, 00:19:14.479 "max_io_qpairs_per_ctrlr": 127, 00:19:14.479 "in_capsule_data_size": 4096, 00:19:14.479 "max_io_size": 131072, 00:19:14.479 "io_unit_size": 131072, 00:19:14.479 "max_aq_depth": 128, 00:19:14.479 "num_shared_buffers": 511, 00:19:14.479 "buf_cache_size": 4294967295, 00:19:14.479 "dif_insert_or_strip": false, 00:19:14.479 "zcopy": false, 00:19:14.479 "c2h_success": false, 00:19:14.479 "sock_priority": 0, 00:19:14.479 "abort_timeout_sec": 1, 00:19:14.479 "ack_timeout": 0, 00:19:14.479 "data_wr_pool_size": 0 00:19:14.479 } 00:19:14.479 }, 00:19:14.479 { 00:19:14.479 "method": "nvmf_create_subsystem", 00:19:14.479 "params": { 00:19:14.479 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.479 "allow_any_host": false, 00:19:14.479 "serial_number": "SPDK00000000000001", 00:19:14.479 "model_number": "SPDK bdev Controller", 00:19:14.479 "max_namespaces": 10, 00:19:14.479 "min_cntlid": 1, 00:19:14.479 "max_cntlid": 65519, 00:19:14.479 "ana_reporting": false 00:19:14.479 } 00:19:14.479 }, 00:19:14.479 { 00:19:14.479 "method": "nvmf_subsystem_add_host", 00:19:14.479 "params": { 00:19:14.479 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.479 "host": "nqn.2016-06.io.spdk:host1", 00:19:14.479 "psk": "key0" 00:19:14.479 } 00:19:14.479 }, 00:19:14.479 { 00:19:14.479 "method": "nvmf_subsystem_add_ns", 00:19:14.479 "params": { 00:19:14.479 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.479 "namespace": { 00:19:14.479 "nsid": 1, 00:19:14.479 "bdev_name": "malloc0", 00:19:14.479 "nguid": "2375B06F1DCD40AF8DA653370C73AD13", 00:19:14.479 "uuid": "2375b06f-1dcd-40af-8da6-53370c73ad13", 00:19:14.479 "no_auto_visible": false 00:19:14.479 } 00:19:14.479 } 00:19:14.479 }, 00:19:14.479 { 00:19:14.479 "method": "nvmf_subsystem_add_listener", 00:19:14.479 "params": { 00:19:14.479 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.479 "listen_address": { 00:19:14.479 "trtype": "TCP", 00:19:14.479 "adrfam": "IPv4", 00:19:14.479 "traddr": "10.0.0.2", 00:19:14.479 "trsvcid": "4420" 00:19:14.479 }, 00:19:14.479 "secure_channel": true 00:19:14.479 } 00:19:14.479 } 00:19:14.479 ] 00:19:14.479 } 00:19:14.479 ] 00:19:14.479 }' 00:19:14.479 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.479 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1273083 00:19:14.479 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:14.479 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1273083 00:19:14.479 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1273083 ']' 00:19:14.479 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.479 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:14.479 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:14.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.479 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:14.479 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.737 [2024-10-08 18:26:32.779299] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:19:14.737 [2024-10-08 18:26:32.779374] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.737 [2024-10-08 18:26:32.843114] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.737 [2024-10-08 18:26:32.955499] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:14.737 [2024-10-08 18:26:32.955557] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:14.738 [2024-10-08 18:26:32.955587] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:14.738 [2024-10-08 18:26:32.955599] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:14.738 [2024-10-08 18:26:32.955609] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:14.738 [2024-10-08 18:26:32.956294] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:14.994 [2024-10-08 18:26:33.204517] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:14.994 [2024-10-08 18:26:33.236539] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:14.994 [2024-10-08 18:26:33.236817] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:15.560 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:15.560 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:15.560 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:15.560 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:15.560 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.560 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:15.560 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1273233 00:19:15.560 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1273233 /var/tmp/bdevperf.sock 00:19:15.560 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1273233 ']' 00:19:15.560 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:15.560 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:15.560 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:15.560 18:26:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:15.560 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:15.560 "subsystems": [ 00:19:15.560 { 00:19:15.560 "subsystem": "keyring", 00:19:15.560 "config": [ 00:19:15.560 { 00:19:15.560 "method": "keyring_file_add_key", 00:19:15.560 "params": { 00:19:15.560 "name": "key0", 00:19:15.560 "path": "/tmp/tmp.NPWhb1Ovl0" 00:19:15.560 } 00:19:15.560 } 00:19:15.560 ] 00:19:15.560 }, 00:19:15.560 { 00:19:15.560 "subsystem": "iobuf", 00:19:15.560 "config": [ 00:19:15.560 { 00:19:15.560 "method": "iobuf_set_options", 00:19:15.560 "params": { 00:19:15.560 "small_pool_count": 8192, 00:19:15.560 "large_pool_count": 1024, 00:19:15.560 "small_bufsize": 8192, 00:19:15.560 "large_bufsize": 135168 00:19:15.560 } 00:19:15.560 } 00:19:15.560 ] 00:19:15.560 }, 00:19:15.560 { 00:19:15.560 "subsystem": "sock", 00:19:15.560 "config": [ 00:19:15.560 { 00:19:15.560 "method": "sock_set_default_impl", 00:19:15.560 "params": { 00:19:15.560 "impl_name": "posix" 00:19:15.560 } 00:19:15.560 }, 00:19:15.560 { 00:19:15.560 "method": "sock_impl_set_options", 00:19:15.560 "params": { 00:19:15.560 "impl_name": "ssl", 00:19:15.560 "recv_buf_size": 4096, 00:19:15.560 "send_buf_size": 4096, 00:19:15.560 "enable_recv_pipe": true, 00:19:15.560 "enable_quickack": false, 00:19:15.560 "enable_placement_id": 0, 00:19:15.560 "enable_zerocopy_send_server": true, 00:19:15.560 "enable_zerocopy_send_client": false, 00:19:15.560 "zerocopy_threshold": 0, 00:19:15.560 "tls_version": 0, 00:19:15.560 "enable_ktls": false 00:19:15.560 } 00:19:15.560 }, 00:19:15.560 { 00:19:15.560 "method": "sock_impl_set_options", 00:19:15.560 "params": { 00:19:15.560 "impl_name": "posix", 00:19:15.560 "recv_buf_size": 2097152, 00:19:15.560 "send_buf_size": 2097152, 00:19:15.560 "enable_recv_pipe": true, 00:19:15.560 "enable_quickack": false, 00:19:15.560 "enable_placement_id": 0, 00:19:15.560 "enable_zerocopy_send_server": true, 00:19:15.560 "enable_zerocopy_send_client": false, 00:19:15.560 "zerocopy_threshold": 0, 00:19:15.560 "tls_version": 0, 00:19:15.560 "enable_ktls": false 00:19:15.560 } 00:19:15.560 } 00:19:15.560 ] 00:19:15.560 }, 00:19:15.560 { 00:19:15.560 "subsystem": "vmd", 00:19:15.560 "config": [] 00:19:15.560 }, 00:19:15.560 { 00:19:15.560 "subsystem": "accel", 00:19:15.560 "config": [ 00:19:15.560 { 00:19:15.560 "method": "accel_set_options", 00:19:15.560 "params": { 00:19:15.560 "small_cache_size": 128, 00:19:15.560 "large_cache_size": 16, 00:19:15.560 "task_count": 2048, 00:19:15.560 "sequence_count": 2048, 00:19:15.560 "buf_count": 2048 00:19:15.560 } 00:19:15.560 } 00:19:15.560 ] 00:19:15.560 }, 00:19:15.560 { 00:19:15.560 "subsystem": "bdev", 00:19:15.560 "config": [ 00:19:15.560 { 00:19:15.560 "method": "bdev_set_options", 00:19:15.560 "params": { 00:19:15.560 "bdev_io_pool_size": 65535, 00:19:15.560 "bdev_io_cache_size": 256, 00:19:15.560 "bdev_auto_examine": true, 00:19:15.560 "iobuf_small_cache_size": 128, 00:19:15.560 "iobuf_large_cache_size": 16 00:19:15.560 } 00:19:15.560 }, 00:19:15.560 { 00:19:15.560 "method": "bdev_raid_set_options", 00:19:15.560 "params": { 00:19:15.560 "process_window_size_kb": 1024, 00:19:15.560 "process_max_bandwidth_mb_sec": 0 00:19:15.560 } 00:19:15.560 }, 00:19:15.560 { 00:19:15.560 "method": "bdev_iscsi_set_options", 00:19:15.560 "params": { 00:19:15.560 "timeout_sec": 30 00:19:15.560 } 
00:19:15.560 }, 00:19:15.560 { 00:19:15.560 "method": "bdev_nvme_set_options", 00:19:15.560 "params": { 00:19:15.560 "action_on_timeout": "none", 00:19:15.560 "timeout_us": 0, 00:19:15.560 "timeout_admin_us": 0, 00:19:15.560 "keep_alive_timeout_ms": 10000, 00:19:15.560 "arbitration_burst": 0, 00:19:15.561 "low_priority_weight": 0, 00:19:15.561 "medium_priority_weight": 0, 00:19:15.561 "high_priority_weight": 0, 00:19:15.561 "nvme_adminq_poll_period_us": 10000, 00:19:15.561 "nvme_ioq_poll_period_us": 0, 00:19:15.561 "io_queue_requests": 512, 00:19:15.561 "delay_cmd_submit": true, 00:19:15.561 "transport_retry_count": 4, 00:19:15.561 "bdev_retry_count": 3, 00:19:15.561 "transport_ack_timeout": 0, 00:19:15.561 "ctrlr_loss_timeout_sec": 0, 00:19:15.561 "reconnect_delay_sec": 0, 00:19:15.561 "fast_io_fail_timeout_sec": 0, 00:19:15.561 "disable_auto_failback": false, 00:19:15.561 "generate_uuids": false, 00:19:15.561 "transport_tos": 0, 00:19:15.561 "nvme_error_stat": false, 00:19:15.561 "rdma_srq_size": 0, 00:19:15.561 "io_path_stat": false, 00:19:15.561 "allow_accel_sequence": false, 00:19:15.561 "rdma_max_cq_size": 0, 00:19:15.561 "rdma_cm_event_timeout_ms": 0, 00:19:15.561 "dhchap_digests": [ 00:19:15.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:15.561 "sha256", 00:19:15.561 "sha384", 00:19:15.561 "sha512" 00:19:15.561 ], 00:19:15.561 "dhchap_dhgroups": [ 00:19:15.561 "null", 00:19:15.561 "ffdhe2048", 00:19:15.561 "ffdhe3072", 00:19:15.561 "ffdhe4096", 00:19:15.561 "ffdhe6144", 00:19:15.561 "ffdhe8192" 00:19:15.561 ] 00:19:15.561 } 00:19:15.561 }, 00:19:15.561 { 00:19:15.561 "method": "bdev_nvme_attach_controller", 00:19:15.561 "params": { 00:19:15.561 "name": "TLSTEST", 00:19:15.561 "trtype": "TCP", 00:19:15.561 "adrfam": "IPv4", 00:19:15.561 "traddr": "10.0.0.2", 00:19:15.561 "trsvcid": "4420", 00:19:15.561 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.561 "prchk_reftag": false, 00:19:15.561 "prchk_guard": false, 00:19:15.561 "ctrlr_loss_timeout_sec": 0, 00:19:15.561 "reconnect_delay_sec": 0, 00:19:15.561 "fast_io_fail_timeout_sec": 0, 00:19:15.561 "psk": "key0", 00:19:15.561 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:15.561 "hdgst": false, 00:19:15.561 "ddgst": false, 00:19:15.561 "multipath": "multipath" 00:19:15.561 } 00:19:15.561 }, 00:19:15.561 { 00:19:15.561 "method": "bdev_nvme_set_hotplug", 00:19:15.561 "params": { 00:19:15.561 "period_us": 100000, 00:19:15.561 "enable": false 00:19:15.561 } 00:19:15.561 }, 00:19:15.561 { 00:19:15.561 "method": "bdev_wait_for_examine" 00:19:15.561 } 00:19:15.561 ] 00:19:15.561 }, 00:19:15.561 { 00:19:15.561 "subsystem": "nbd", 00:19:15.561 "config": [] 00:19:15.561 } 00:19:15.561 ] 00:19:15.561 }' 00:19:15.561 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:15.561 18:26:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.561 [2024-10-08 18:26:33.844375] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
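Note: the -c /dev/fd/62 (nvmf_tgt) and -c /dev/fd/63 (bdevperf) arguments above are file descriptors created by the test script when it replays the JSON captured with save_config into fresh processes. The wrapper code itself is not part of this log, but the pattern is presumably bash process substitution along these lines (netns wrapper and paths shortened, variable names as traced):
    # assumed pattern: JSON from save_config is fed back to a new instance on an fd
    tgtconf=$(scripts/rpc.py save_config)
    bdevperfconf=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
    nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
        -c <(echo "$bdevperfconf")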
00:19:15.561 [2024-10-08 18:26:33.844453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1273233 ] 00:19:15.819 [2024-10-08 18:26:33.900585] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.819 [2024-10-08 18:26:34.009729] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.077 [2024-10-08 18:26:34.181857] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:16.642 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:16.642 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:16.642 18:26:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:16.900 Running I/O for 10 seconds... 00:19:18.765 3315.00 IOPS, 12.95 MiB/s [2024-10-08T16:26:37.985Z] 3344.50 IOPS, 13.06 MiB/s [2024-10-08T16:26:39.356Z] 3376.33 IOPS, 13.19 MiB/s [2024-10-08T16:26:40.288Z] 3382.00 IOPS, 13.21 MiB/s [2024-10-08T16:26:41.221Z] 3369.80 IOPS, 13.16 MiB/s [2024-10-08T16:26:42.157Z] 3366.83 IOPS, 13.15 MiB/s [2024-10-08T16:26:43.094Z] 3377.71 IOPS, 13.19 MiB/s [2024-10-08T16:26:44.027Z] 3365.12 IOPS, 13.15 MiB/s [2024-10-08T16:26:44.962Z] 3374.33 IOPS, 13.18 MiB/s [2024-10-08T16:26:45.220Z] 3380.30 IOPS, 13.20 MiB/s 00:19:26.933 Latency(us) 00:19:26.933 [2024-10-08T16:26:45.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.933 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:26.933 Verification LBA range: start 0x0 length 0x2000 00:19:26.933 TLSTESTn1 : 10.03 3382.83 13.21 0.00 0.00 37758.25 6213.78 38447.79 00:19:26.933 [2024-10-08T16:26:45.220Z] =================================================================================================================== 00:19:26.933 [2024-10-08T16:26:45.220Z] Total : 3382.83 13.21 0.00 0.00 37758.25 6213.78 38447.79 00:19:26.933 { 00:19:26.933 "results": [ 00:19:26.933 { 00:19:26.933 "job": "TLSTESTn1", 00:19:26.933 "core_mask": "0x4", 00:19:26.933 "workload": "verify", 00:19:26.933 "status": "finished", 00:19:26.933 "verify_range": { 00:19:26.933 "start": 0, 00:19:26.933 "length": 8192 00:19:26.933 }, 00:19:26.933 "queue_depth": 128, 00:19:26.933 "io_size": 4096, 00:19:26.933 "runtime": 10.029763, 00:19:26.933 "iops": 3382.831678076541, 00:19:26.933 "mibps": 13.214186242486488, 00:19:26.933 "io_failed": 0, 00:19:26.933 "io_timeout": 0, 00:19:26.933 "avg_latency_us": 37758.24730222043, 00:19:26.933 "min_latency_us": 6213.783703703703, 00:19:26.933 "max_latency_us": 38447.78666666667 00:19:26.933 } 00:19:26.933 ], 00:19:26.933 "core_count": 1 00:19:26.933 } 00:19:26.933 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:26.933 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1273233 00:19:26.933 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1273233 ']' 00:19:26.933 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1273233 00:19:26.933 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:19:26.933 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:26.933 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1273233 00:19:26.933 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:26.933 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:26.933 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1273233' 00:19:26.933 killing process with pid 1273233 00:19:26.933 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1273233 00:19:26.933 Received shutdown signal, test time was about 10.000000 seconds 00:19:26.933 00:19:26.933 Latency(us) 00:19:26.933 [2024-10-08T16:26:45.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.933 [2024-10-08T16:26:45.220Z] =================================================================================================================== 00:19:26.933 [2024-10-08T16:26:45.220Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:26.933 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1273233 00:19:27.191 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1273083 00:19:27.191 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1273083 ']' 00:19:27.191 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1273083 00:19:27.191 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:27.191 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:27.191 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1273083 00:19:27.191 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:27.191 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:27.191 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1273083' 00:19:27.191 killing process with pid 1273083 00:19:27.191 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1273083 00:19:27.191 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1273083 00:19:27.449 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:27.449 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:27.449 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:27.449 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.449 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1274565 00:19:27.449 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:27.449 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1274565 
00:19:27.449 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1274565 ']' 00:19:27.449 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.449 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:27.449 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.449 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:27.449 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.449 [2024-10-08 18:26:45.696210] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:19:27.449 [2024-10-08 18:26:45.696309] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.707 [2024-10-08 18:26:45.758192] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.707 [2024-10-08 18:26:45.862257] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:27.707 [2024-10-08 18:26:45.862335] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:27.707 [2024-10-08 18:26:45.862349] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:27.707 [2024-10-08 18:26:45.862374] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:27.708 [2024-10-08 18:26:45.862383] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
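Note: the setup_nvmf_tgt helper exercised repeatedly in this log (target/tls.sh@50-59) condenses to the RPC sequence below; rpc.py paths are shortened and the key file is a placeholder, everything else mirrors the traced commands:
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k        # -k requests the TLS (secure channel) listener
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py keyring_file_add_key key0 /tmp/psk.txt
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0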
00:19:27.708 [2024-10-08 18:26:45.862924] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.708 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:27.708 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:27.708 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:27.708 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:27.708 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.965 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:27.965 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.NPWhb1Ovl0 00:19:27.965 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.NPWhb1Ovl0 00:19:27.966 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:28.224 [2024-10-08 18:26:46.255904] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:28.224 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:28.482 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:28.741 [2024-10-08 18:26:46.789359] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:28.741 [2024-10-08 18:26:46.789678] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:28.741 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:28.999 malloc0 00:19:28.999 18:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:29.257 18:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.NPWhb1Ovl0 00:19:29.516 18:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:29.774 18:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1274855 00:19:29.774 18:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:29.774 18:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1274855 /var/tmp/bdevperf.sock 00:19:29.774 18:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1274855 ']' 00:19:29.774 18:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:29.774 18:26:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:29.774 18:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:29.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:29.774 18:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:29.774 18:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:29.774 18:26:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.774 [2024-10-08 18:26:47.944535] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:19:29.774 [2024-10-08 18:26:47.944627] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1274855 ] 00:19:29.774 [2024-10-08 18:26:48.003045] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.032 [2024-10-08 18:26:48.114380] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.032 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:30.032 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:30.032 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NPWhb1Ovl0 00:19:30.288 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:30.546 [2024-10-08 18:26:48.748622] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:30.546 nvme0n1 00:19:30.803 18:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:30.803 Running I/O for 1 seconds... 
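The initiator side traced above mirrors that sequence: bdevperf is started in RPC-wait mode, the same PSK is registered with its own keyring, and the controller is attached over TLS before the workload is started. A minimal sketch, assuming the same ./spdk shorthand and socket paths as in this run:

  # start bdevperf idle (-z) on its own RPC socket
  ./spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
  # load the PSK into bdevperf's keyring, then attach the TLS-enabled controller
  ./spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NPWhb1Ovl0
  ./spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # run the workload defined by the bdevperf command line (-q 128 -o 4k -w verify -t 1)
  ./spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests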
00:19:31.736 3322.00 IOPS, 12.98 MiB/s 00:19:31.736 Latency(us) 00:19:31.736 [2024-10-08T16:26:50.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.736 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:31.736 Verification LBA range: start 0x0 length 0x2000 00:19:31.736 nvme0n1 : 1.03 3338.13 13.04 0.00 0.00 37814.73 9320.68 33981.63 00:19:31.736 [2024-10-08T16:26:50.023Z] =================================================================================================================== 00:19:31.736 [2024-10-08T16:26:50.023Z] Total : 3338.13 13.04 0.00 0.00 37814.73 9320.68 33981.63 00:19:31.736 { 00:19:31.737 "results": [ 00:19:31.737 { 00:19:31.737 "job": "nvme0n1", 00:19:31.737 "core_mask": "0x2", 00:19:31.737 "workload": "verify", 00:19:31.737 "status": "finished", 00:19:31.737 "verify_range": { 00:19:31.737 "start": 0, 00:19:31.737 "length": 8192 00:19:31.737 }, 00:19:31.737 "queue_depth": 128, 00:19:31.737 "io_size": 4096, 00:19:31.737 "runtime": 1.033513, 00:19:31.737 "iops": 3338.1292736520973, 00:19:31.737 "mibps": 13.039567475203505, 00:19:31.737 "io_failed": 0, 00:19:31.737 "io_timeout": 0, 00:19:31.737 "avg_latency_us": 37814.72930971551, 00:19:31.737 "min_latency_us": 9320.675555555556, 00:19:31.737 "max_latency_us": 33981.62962962963 00:19:31.737 } 00:19:31.737 ], 00:19:31.737 "core_count": 1 00:19:31.737 } 00:19:31.737 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1274855 00:19:31.737 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1274855 ']' 00:19:31.737 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1274855 00:19:31.737 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:31.737 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:31.737 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1274855 00:19:31.994 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:31.994 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:31.994 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1274855' 00:19:31.994 killing process with pid 1274855 00:19:31.994 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1274855 00:19:31.994 Received shutdown signal, test time was about 1.000000 seconds 00:19:31.994 00:19:31.994 Latency(us) 00:19:31.994 [2024-10-08T16:26:50.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.994 [2024-10-08T16:26:50.281Z] =================================================================================================================== 00:19:31.994 [2024-10-08T16:26:50.281Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:31.994 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1274855 00:19:32.252 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1274565 00:19:32.252 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1274565 ']' 00:19:32.252 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1274565 00:19:32.252 18:26:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:32.252 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:32.252 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1274565 00:19:32.252 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:32.252 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:32.252 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1274565' 00:19:32.252 killing process with pid 1274565 00:19:32.252 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1274565 00:19:32.252 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1274565 00:19:32.514 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:32.515 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:32.515 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:32.515 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.515 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1275255 00:19:32.515 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:32.515 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1275255 00:19:32.515 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1275255 ']' 00:19:32.515 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.515 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:32.515 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.515 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:32.515 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.515 [2024-10-08 18:26:50.718714] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:19:32.515 [2024-10-08 18:26:50.718804] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:32.515 [2024-10-08 18:26:50.780507] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.773 [2024-10-08 18:26:50.880673] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:32.773 [2024-10-08 18:26:50.880736] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:32.773 [2024-10-08 18:26:50.880764] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:32.773 [2024-10-08 18:26:50.880774] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:32.773 [2024-10-08 18:26:50.880783] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:32.773 [2024-10-08 18:26:50.881339] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.773 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:32.773 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:32.773 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:32.773 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:32.773 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.773 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.773 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:32.773 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.773 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.773 [2024-10-08 18:26:51.028627] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:32.773 malloc0 00:19:33.031 [2024-10-08 18:26:51.070201] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:33.031 [2024-10-08 18:26:51.070502] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:33.031 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.031 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1275277 00:19:33.031 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:33.031 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1275277 /var/tmp/bdevperf.sock 00:19:33.031 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1275277 ']' 00:19:33.031 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:33.031 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:33.031 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:33.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:33.031 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:33.031 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.031 [2024-10-08 18:26:51.140774] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:19:33.031 [2024-10-08 18:26:51.140850] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1275277 ] 00:19:33.031 [2024-10-08 18:26:51.199217] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.031 [2024-10-08 18:26:51.306914] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:33.289 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:33.289 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:33.289 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NPWhb1Ovl0 00:19:33.547 18:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:33.805 [2024-10-08 18:26:51.947193] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:33.805 nvme0n1 00:19:33.805 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:34.062 Running I/O for 1 seconds... 00:19:35.044 3400.00 IOPS, 13.28 MiB/s 00:19:35.044 Latency(us) 00:19:35.044 [2024-10-08T16:26:53.331Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.044 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:35.044 Verification LBA range: start 0x0 length 0x2000 00:19:35.044 nvme0n1 : 1.02 3448.32 13.47 0.00 0.00 36770.06 6893.42 33010.73 00:19:35.044 [2024-10-08T16:26:53.331Z] =================================================================================================================== 00:19:35.044 [2024-10-08T16:26:53.331Z] Total : 3448.32 13.47 0.00 0.00 36770.06 6893.42 33010.73 00:19:35.044 { 00:19:35.044 "results": [ 00:19:35.044 { 00:19:35.044 "job": "nvme0n1", 00:19:35.044 "core_mask": "0x2", 00:19:35.044 "workload": "verify", 00:19:35.044 "status": "finished", 00:19:35.044 "verify_range": { 00:19:35.044 "start": 0, 00:19:35.044 "length": 8192 00:19:35.044 }, 00:19:35.044 "queue_depth": 128, 00:19:35.044 "io_size": 4096, 00:19:35.044 "runtime": 1.023108, 00:19:35.044 "iops": 3448.316306782862, 00:19:35.044 "mibps": 13.469985573370554, 00:19:35.044 "io_failed": 0, 00:19:35.044 "io_timeout": 0, 00:19:35.044 "avg_latency_us": 36770.06066347527, 00:19:35.044 "min_latency_us": 6893.416296296296, 00:19:35.044 "max_latency_us": 33010.72592592592 00:19:35.044 } 00:19:35.044 ], 00:19:35.044 "core_count": 1 00:19:35.044 } 00:19:35.044 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:35.044 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.044 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.044 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.044 18:26:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:35.044 "subsystems": [ 00:19:35.044 { 00:19:35.044 "subsystem": "keyring", 00:19:35.044 "config": [ 00:19:35.044 { 00:19:35.044 "method": "keyring_file_add_key", 00:19:35.044 "params": { 00:19:35.044 "name": "key0", 00:19:35.044 "path": "/tmp/tmp.NPWhb1Ovl0" 00:19:35.044 } 00:19:35.044 } 00:19:35.044 ] 00:19:35.044 }, 00:19:35.044 { 00:19:35.044 "subsystem": "iobuf", 00:19:35.044 "config": [ 00:19:35.044 { 00:19:35.044 "method": "iobuf_set_options", 00:19:35.044 "params": { 00:19:35.044 "small_pool_count": 8192, 00:19:35.044 "large_pool_count": 1024, 00:19:35.044 "small_bufsize": 8192, 00:19:35.044 "large_bufsize": 135168 00:19:35.044 } 00:19:35.044 } 00:19:35.044 ] 00:19:35.044 }, 00:19:35.044 { 00:19:35.044 "subsystem": "sock", 00:19:35.044 "config": [ 00:19:35.044 { 00:19:35.044 "method": "sock_set_default_impl", 00:19:35.044 "params": { 00:19:35.044 "impl_name": "posix" 00:19:35.044 } 00:19:35.044 }, 00:19:35.044 { 00:19:35.044 "method": "sock_impl_set_options", 00:19:35.044 "params": { 00:19:35.044 "impl_name": "ssl", 00:19:35.044 "recv_buf_size": 4096, 00:19:35.044 "send_buf_size": 4096, 00:19:35.044 "enable_recv_pipe": true, 00:19:35.044 "enable_quickack": false, 00:19:35.044 "enable_placement_id": 0, 00:19:35.044 "enable_zerocopy_send_server": true, 00:19:35.044 "enable_zerocopy_send_client": false, 00:19:35.044 "zerocopy_threshold": 0, 00:19:35.044 "tls_version": 0, 00:19:35.044 "enable_ktls": false 00:19:35.044 } 00:19:35.044 }, 00:19:35.044 { 00:19:35.044 "method": "sock_impl_set_options", 00:19:35.044 "params": { 00:19:35.044 "impl_name": "posix", 00:19:35.044 "recv_buf_size": 2097152, 00:19:35.044 "send_buf_size": 2097152, 00:19:35.044 "enable_recv_pipe": true, 00:19:35.044 "enable_quickack": false, 00:19:35.044 "enable_placement_id": 0, 00:19:35.044 "enable_zerocopy_send_server": true, 00:19:35.044 "enable_zerocopy_send_client": false, 00:19:35.044 "zerocopy_threshold": 0, 00:19:35.044 "tls_version": 0, 00:19:35.044 "enable_ktls": false 00:19:35.044 } 00:19:35.044 } 00:19:35.044 ] 00:19:35.044 }, 00:19:35.044 { 00:19:35.044 "subsystem": "vmd", 00:19:35.044 "config": [] 00:19:35.044 }, 00:19:35.044 { 00:19:35.044 "subsystem": "accel", 00:19:35.044 "config": [ 00:19:35.044 { 00:19:35.044 "method": "accel_set_options", 00:19:35.044 "params": { 00:19:35.044 "small_cache_size": 128, 00:19:35.044 "large_cache_size": 16, 00:19:35.044 "task_count": 2048, 00:19:35.044 "sequence_count": 2048, 00:19:35.044 "buf_count": 2048 00:19:35.044 } 00:19:35.044 } 00:19:35.044 ] 00:19:35.044 }, 00:19:35.044 { 00:19:35.044 "subsystem": "bdev", 00:19:35.044 "config": [ 00:19:35.044 { 00:19:35.044 "method": "bdev_set_options", 00:19:35.044 "params": { 00:19:35.044 "bdev_io_pool_size": 65535, 00:19:35.044 "bdev_io_cache_size": 256, 00:19:35.044 "bdev_auto_examine": true, 00:19:35.044 "iobuf_small_cache_size": 128, 00:19:35.044 "iobuf_large_cache_size": 16 00:19:35.044 } 00:19:35.044 }, 00:19:35.044 { 00:19:35.044 "method": "bdev_raid_set_options", 00:19:35.044 "params": { 00:19:35.044 "process_window_size_kb": 1024, 00:19:35.045 "process_max_bandwidth_mb_sec": 0 00:19:35.045 } 00:19:35.045 }, 00:19:35.045 { 00:19:35.045 "method": "bdev_iscsi_set_options", 00:19:35.045 "params": { 00:19:35.045 "timeout_sec": 30 00:19:35.045 } 00:19:35.045 }, 00:19:35.045 { 00:19:35.045 "method": "bdev_nvme_set_options", 00:19:35.045 "params": { 00:19:35.045 "action_on_timeout": "none", 00:19:35.045 "timeout_us": 0, 00:19:35.045 
"timeout_admin_us": 0, 00:19:35.045 "keep_alive_timeout_ms": 10000, 00:19:35.045 "arbitration_burst": 0, 00:19:35.045 "low_priority_weight": 0, 00:19:35.045 "medium_priority_weight": 0, 00:19:35.045 "high_priority_weight": 0, 00:19:35.045 "nvme_adminq_poll_period_us": 10000, 00:19:35.045 "nvme_ioq_poll_period_us": 0, 00:19:35.045 "io_queue_requests": 0, 00:19:35.045 "delay_cmd_submit": true, 00:19:35.045 "transport_retry_count": 4, 00:19:35.045 "bdev_retry_count": 3, 00:19:35.045 "transport_ack_timeout": 0, 00:19:35.045 "ctrlr_loss_timeout_sec": 0, 00:19:35.045 "reconnect_delay_sec": 0, 00:19:35.045 "fast_io_fail_timeout_sec": 0, 00:19:35.045 "disable_auto_failback": false, 00:19:35.045 "generate_uuids": false, 00:19:35.045 "transport_tos": 0, 00:19:35.045 "nvme_error_stat": false, 00:19:35.045 "rdma_srq_size": 0, 00:19:35.045 "io_path_stat": false, 00:19:35.045 "allow_accel_sequence": false, 00:19:35.045 "rdma_max_cq_size": 0, 00:19:35.045 "rdma_cm_event_timeout_ms": 0, 00:19:35.045 "dhchap_digests": [ 00:19:35.045 "sha256", 00:19:35.045 "sha384", 00:19:35.045 "sha512" 00:19:35.045 ], 00:19:35.045 "dhchap_dhgroups": [ 00:19:35.045 "null", 00:19:35.045 "ffdhe2048", 00:19:35.045 "ffdhe3072", 00:19:35.045 "ffdhe4096", 00:19:35.045 "ffdhe6144", 00:19:35.045 "ffdhe8192" 00:19:35.045 ] 00:19:35.045 } 00:19:35.045 }, 00:19:35.045 { 00:19:35.045 "method": "bdev_nvme_set_hotplug", 00:19:35.045 "params": { 00:19:35.045 "period_us": 100000, 00:19:35.045 "enable": false 00:19:35.045 } 00:19:35.045 }, 00:19:35.045 { 00:19:35.045 "method": "bdev_malloc_create", 00:19:35.045 "params": { 00:19:35.045 "name": "malloc0", 00:19:35.045 "num_blocks": 8192, 00:19:35.045 "block_size": 4096, 00:19:35.045 "physical_block_size": 4096, 00:19:35.045 "uuid": "0dd77360-04f4-4983-807b-ac46d2b76a50", 00:19:35.045 "optimal_io_boundary": 0, 00:19:35.045 "md_size": 0, 00:19:35.045 "dif_type": 0, 00:19:35.045 "dif_is_head_of_md": false, 00:19:35.045 "dif_pi_format": 0 00:19:35.045 } 00:19:35.045 }, 00:19:35.045 { 00:19:35.045 "method": "bdev_wait_for_examine" 00:19:35.045 } 00:19:35.045 ] 00:19:35.045 }, 00:19:35.045 { 00:19:35.045 "subsystem": "nbd", 00:19:35.045 "config": [] 00:19:35.045 }, 00:19:35.045 { 00:19:35.045 "subsystem": "scheduler", 00:19:35.045 "config": [ 00:19:35.045 { 00:19:35.045 "method": "framework_set_scheduler", 00:19:35.045 "params": { 00:19:35.045 "name": "static" 00:19:35.045 } 00:19:35.045 } 00:19:35.045 ] 00:19:35.045 }, 00:19:35.045 { 00:19:35.045 "subsystem": "nvmf", 00:19:35.045 "config": [ 00:19:35.045 { 00:19:35.045 "method": "nvmf_set_config", 00:19:35.045 "params": { 00:19:35.045 "discovery_filter": "match_any", 00:19:35.045 "admin_cmd_passthru": { 00:19:35.045 "identify_ctrlr": false 00:19:35.045 }, 00:19:35.045 "dhchap_digests": [ 00:19:35.045 "sha256", 00:19:35.045 "sha384", 00:19:35.045 "sha512" 00:19:35.045 ], 00:19:35.045 "dhchap_dhgroups": [ 00:19:35.045 "null", 00:19:35.045 "ffdhe2048", 00:19:35.045 "ffdhe3072", 00:19:35.045 "ffdhe4096", 00:19:35.045 "ffdhe6144", 00:19:35.045 "ffdhe8192" 00:19:35.045 ] 00:19:35.045 } 00:19:35.045 }, 00:19:35.045 { 00:19:35.045 "method": "nvmf_set_max_subsystems", 00:19:35.045 "params": { 00:19:35.045 "max_subsystems": 1024 00:19:35.045 } 00:19:35.045 }, 00:19:35.045 { 00:19:35.045 "method": "nvmf_set_crdt", 00:19:35.045 "params": { 00:19:35.045 "crdt1": 0, 00:19:35.045 "crdt2": 0, 00:19:35.045 "crdt3": 0 00:19:35.045 } 00:19:35.045 }, 00:19:35.045 { 00:19:35.045 "method": "nvmf_create_transport", 00:19:35.045 "params": { 00:19:35.045 "trtype": 
"TCP", 00:19:35.045 "max_queue_depth": 128, 00:19:35.045 "max_io_qpairs_per_ctrlr": 127, 00:19:35.045 "in_capsule_data_size": 4096, 00:19:35.045 "max_io_size": 131072, 00:19:35.045 "io_unit_size": 131072, 00:19:35.045 "max_aq_depth": 128, 00:19:35.045 "num_shared_buffers": 511, 00:19:35.045 "buf_cache_size": 4294967295, 00:19:35.045 "dif_insert_or_strip": false, 00:19:35.045 "zcopy": false, 00:19:35.045 "c2h_success": false, 00:19:35.045 "sock_priority": 0, 00:19:35.045 "abort_timeout_sec": 1, 00:19:35.045 "ack_timeout": 0, 00:19:35.045 "data_wr_pool_size": 0 00:19:35.045 } 00:19:35.045 }, 00:19:35.045 { 00:19:35.045 "method": "nvmf_create_subsystem", 00:19:35.045 "params": { 00:19:35.045 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.045 "allow_any_host": false, 00:19:35.045 "serial_number": "00000000000000000000", 00:19:35.045 "model_number": "SPDK bdev Controller", 00:19:35.045 "max_namespaces": 32, 00:19:35.045 "min_cntlid": 1, 00:19:35.045 "max_cntlid": 65519, 00:19:35.045 "ana_reporting": false 00:19:35.045 } 00:19:35.045 }, 00:19:35.045 { 00:19:35.045 "method": "nvmf_subsystem_add_host", 00:19:35.045 "params": { 00:19:35.045 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.045 "host": "nqn.2016-06.io.spdk:host1", 00:19:35.045 "psk": "key0" 00:19:35.045 } 00:19:35.045 }, 00:19:35.045 { 00:19:35.045 "method": "nvmf_subsystem_add_ns", 00:19:35.045 "params": { 00:19:35.045 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.045 "namespace": { 00:19:35.045 "nsid": 1, 00:19:35.045 "bdev_name": "malloc0", 00:19:35.045 "nguid": "0DD7736004F44983807BAC46D2B76A50", 00:19:35.045 "uuid": "0dd77360-04f4-4983-807b-ac46d2b76a50", 00:19:35.045 "no_auto_visible": false 00:19:35.045 } 00:19:35.045 } 00:19:35.045 }, 00:19:35.045 { 00:19:35.045 "method": "nvmf_subsystem_add_listener", 00:19:35.045 "params": { 00:19:35.045 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.045 "listen_address": { 00:19:35.045 "trtype": "TCP", 00:19:35.045 "adrfam": "IPv4", 00:19:35.045 "traddr": "10.0.0.2", 00:19:35.045 "trsvcid": "4420" 00:19:35.045 }, 00:19:35.045 "secure_channel": false, 00:19:35.045 "sock_impl": "ssl" 00:19:35.045 } 00:19:35.045 } 00:19:35.045 ] 00:19:35.045 } 00:19:35.045 ] 00:19:35.045 }' 00:19:35.045 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:35.611 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:35.611 "subsystems": [ 00:19:35.611 { 00:19:35.611 "subsystem": "keyring", 00:19:35.611 "config": [ 00:19:35.611 { 00:19:35.611 "method": "keyring_file_add_key", 00:19:35.611 "params": { 00:19:35.611 "name": "key0", 00:19:35.611 "path": "/tmp/tmp.NPWhb1Ovl0" 00:19:35.611 } 00:19:35.611 } 00:19:35.611 ] 00:19:35.611 }, 00:19:35.611 { 00:19:35.611 "subsystem": "iobuf", 00:19:35.611 "config": [ 00:19:35.611 { 00:19:35.611 "method": "iobuf_set_options", 00:19:35.611 "params": { 00:19:35.611 "small_pool_count": 8192, 00:19:35.611 "large_pool_count": 1024, 00:19:35.611 "small_bufsize": 8192, 00:19:35.611 "large_bufsize": 135168 00:19:35.611 } 00:19:35.611 } 00:19:35.611 ] 00:19:35.611 }, 00:19:35.611 { 00:19:35.611 "subsystem": "sock", 00:19:35.611 "config": [ 00:19:35.611 { 00:19:35.611 "method": "sock_set_default_impl", 00:19:35.611 "params": { 00:19:35.611 "impl_name": "posix" 00:19:35.611 } 00:19:35.611 }, 00:19:35.611 { 00:19:35.611 "method": "sock_impl_set_options", 00:19:35.611 "params": { 00:19:35.611 "impl_name": "ssl", 00:19:35.611 
"recv_buf_size": 4096, 00:19:35.611 "send_buf_size": 4096, 00:19:35.611 "enable_recv_pipe": true, 00:19:35.611 "enable_quickack": false, 00:19:35.611 "enable_placement_id": 0, 00:19:35.611 "enable_zerocopy_send_server": true, 00:19:35.611 "enable_zerocopy_send_client": false, 00:19:35.611 "zerocopy_threshold": 0, 00:19:35.611 "tls_version": 0, 00:19:35.611 "enable_ktls": false 00:19:35.611 } 00:19:35.611 }, 00:19:35.611 { 00:19:35.611 "method": "sock_impl_set_options", 00:19:35.611 "params": { 00:19:35.611 "impl_name": "posix", 00:19:35.611 "recv_buf_size": 2097152, 00:19:35.611 "send_buf_size": 2097152, 00:19:35.611 "enable_recv_pipe": true, 00:19:35.611 "enable_quickack": false, 00:19:35.611 "enable_placement_id": 0, 00:19:35.611 "enable_zerocopy_send_server": true, 00:19:35.611 "enable_zerocopy_send_client": false, 00:19:35.611 "zerocopy_threshold": 0, 00:19:35.611 "tls_version": 0, 00:19:35.611 "enable_ktls": false 00:19:35.611 } 00:19:35.611 } 00:19:35.611 ] 00:19:35.611 }, 00:19:35.611 { 00:19:35.611 "subsystem": "vmd", 00:19:35.611 "config": [] 00:19:35.611 }, 00:19:35.611 { 00:19:35.611 "subsystem": "accel", 00:19:35.611 "config": [ 00:19:35.611 { 00:19:35.611 "method": "accel_set_options", 00:19:35.611 "params": { 00:19:35.611 "small_cache_size": 128, 00:19:35.611 "large_cache_size": 16, 00:19:35.611 "task_count": 2048, 00:19:35.611 "sequence_count": 2048, 00:19:35.611 "buf_count": 2048 00:19:35.611 } 00:19:35.611 } 00:19:35.611 ] 00:19:35.611 }, 00:19:35.611 { 00:19:35.611 "subsystem": "bdev", 00:19:35.611 "config": [ 00:19:35.611 { 00:19:35.611 "method": "bdev_set_options", 00:19:35.611 "params": { 00:19:35.611 "bdev_io_pool_size": 65535, 00:19:35.611 "bdev_io_cache_size": 256, 00:19:35.611 "bdev_auto_examine": true, 00:19:35.611 "iobuf_small_cache_size": 128, 00:19:35.611 "iobuf_large_cache_size": 16 00:19:35.611 } 00:19:35.611 }, 00:19:35.611 { 00:19:35.611 "method": "bdev_raid_set_options", 00:19:35.611 "params": { 00:19:35.611 "process_window_size_kb": 1024, 00:19:35.611 "process_max_bandwidth_mb_sec": 0 00:19:35.611 } 00:19:35.611 }, 00:19:35.611 { 00:19:35.611 "method": "bdev_iscsi_set_options", 00:19:35.611 "params": { 00:19:35.611 "timeout_sec": 30 00:19:35.611 } 00:19:35.611 }, 00:19:35.611 { 00:19:35.611 "method": "bdev_nvme_set_options", 00:19:35.611 "params": { 00:19:35.611 "action_on_timeout": "none", 00:19:35.611 "timeout_us": 0, 00:19:35.611 "timeout_admin_us": 0, 00:19:35.611 "keep_alive_timeout_ms": 10000, 00:19:35.611 "arbitration_burst": 0, 00:19:35.611 "low_priority_weight": 0, 00:19:35.611 "medium_priority_weight": 0, 00:19:35.611 "high_priority_weight": 0, 00:19:35.611 "nvme_adminq_poll_period_us": 10000, 00:19:35.611 "nvme_ioq_poll_period_us": 0, 00:19:35.611 "io_queue_requests": 512, 00:19:35.611 "delay_cmd_submit": true, 00:19:35.611 "transport_retry_count": 4, 00:19:35.611 "bdev_retry_count": 3, 00:19:35.611 "transport_ack_timeout": 0, 00:19:35.611 "ctrlr_loss_timeout_sec": 0, 00:19:35.611 "reconnect_delay_sec": 0, 00:19:35.611 "fast_io_fail_timeout_sec": 0, 00:19:35.611 "disable_auto_failback": false, 00:19:35.611 "generate_uuids": false, 00:19:35.611 "transport_tos": 0, 00:19:35.611 "nvme_error_stat": false, 00:19:35.611 "rdma_srq_size": 0, 00:19:35.611 "io_path_stat": false, 00:19:35.611 "allow_accel_sequence": false, 00:19:35.611 "rdma_max_cq_size": 0, 00:19:35.611 "rdma_cm_event_timeout_ms": 0, 00:19:35.611 "dhchap_digests": [ 00:19:35.611 "sha256", 00:19:35.611 "sha384", 00:19:35.611 "sha512" 00:19:35.611 ], 00:19:35.611 "dhchap_dhgroups": [ 
00:19:35.611 "null", 00:19:35.611 "ffdhe2048", 00:19:35.611 "ffdhe3072", 00:19:35.611 "ffdhe4096", 00:19:35.611 "ffdhe6144", 00:19:35.611 "ffdhe8192" 00:19:35.611 ] 00:19:35.611 } 00:19:35.611 }, 00:19:35.611 { 00:19:35.611 "method": "bdev_nvme_attach_controller", 00:19:35.611 "params": { 00:19:35.611 "name": "nvme0", 00:19:35.611 "trtype": "TCP", 00:19:35.611 "adrfam": "IPv4", 00:19:35.611 "traddr": "10.0.0.2", 00:19:35.611 "trsvcid": "4420", 00:19:35.611 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.611 "prchk_reftag": false, 00:19:35.611 "prchk_guard": false, 00:19:35.611 "ctrlr_loss_timeout_sec": 0, 00:19:35.611 "reconnect_delay_sec": 0, 00:19:35.611 "fast_io_fail_timeout_sec": 0, 00:19:35.611 "psk": "key0", 00:19:35.611 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:35.611 "hdgst": false, 00:19:35.611 "ddgst": false, 00:19:35.611 "multipath": "multipath" 00:19:35.611 } 00:19:35.611 }, 00:19:35.611 { 00:19:35.611 "method": "bdev_nvme_set_hotplug", 00:19:35.611 "params": { 00:19:35.611 "period_us": 100000, 00:19:35.611 "enable": false 00:19:35.611 } 00:19:35.611 }, 00:19:35.611 { 00:19:35.611 "method": "bdev_enable_histogram", 00:19:35.611 "params": { 00:19:35.611 "name": "nvme0n1", 00:19:35.611 "enable": true 00:19:35.611 } 00:19:35.611 }, 00:19:35.611 { 00:19:35.611 "method": "bdev_wait_for_examine" 00:19:35.611 } 00:19:35.611 ] 00:19:35.611 }, 00:19:35.611 { 00:19:35.611 "subsystem": "nbd", 00:19:35.611 "config": [] 00:19:35.611 } 00:19:35.611 ] 00:19:35.611 }' 00:19:35.611 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1275277 00:19:35.611 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1275277 ']' 00:19:35.611 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1275277 00:19:35.611 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:35.611 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:35.611 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1275277 00:19:35.611 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:35.611 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:35.611 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1275277' 00:19:35.611 killing process with pid 1275277 00:19:35.611 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1275277 00:19:35.611 Received shutdown signal, test time was about 1.000000 seconds 00:19:35.611 00:19:35.611 Latency(us) 00:19:35.611 [2024-10-08T16:26:53.898Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.611 [2024-10-08T16:26:53.898Z] =================================================================================================================== 00:19:35.611 [2024-10-08T16:26:53.899Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:35.612 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1275277 00:19:35.869 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1275255 00:19:35.869 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1275255 ']' 00:19:35.869 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@954 -- # kill -0 1275255 00:19:35.869 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:35.869 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:35.869 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1275255 00:19:35.869 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:35.869 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:35.870 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1275255' 00:19:35.870 killing process with pid 1275255 00:19:35.870 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1275255 00:19:35.870 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1275255 00:19:36.128 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:36.128 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:36.128 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:36.128 "subsystems": [ 00:19:36.128 { 00:19:36.128 "subsystem": "keyring", 00:19:36.128 "config": [ 00:19:36.128 { 00:19:36.128 "method": "keyring_file_add_key", 00:19:36.128 "params": { 00:19:36.128 "name": "key0", 00:19:36.128 "path": "/tmp/tmp.NPWhb1Ovl0" 00:19:36.128 } 00:19:36.128 } 00:19:36.128 ] 00:19:36.128 }, 00:19:36.128 { 00:19:36.128 "subsystem": "iobuf", 00:19:36.128 "config": [ 00:19:36.128 { 00:19:36.128 "method": "iobuf_set_options", 00:19:36.128 "params": { 00:19:36.128 "small_pool_count": 8192, 00:19:36.128 "large_pool_count": 1024, 00:19:36.128 "small_bufsize": 8192, 00:19:36.128 "large_bufsize": 135168 00:19:36.128 } 00:19:36.128 } 00:19:36.128 ] 00:19:36.128 }, 00:19:36.128 { 00:19:36.128 "subsystem": "sock", 00:19:36.128 "config": [ 00:19:36.128 { 00:19:36.128 "method": "sock_set_default_impl", 00:19:36.128 "params": { 00:19:36.128 "impl_name": "posix" 00:19:36.128 } 00:19:36.128 }, 00:19:36.128 { 00:19:36.128 "method": "sock_impl_set_options", 00:19:36.128 "params": { 00:19:36.128 "impl_name": "ssl", 00:19:36.128 "recv_buf_size": 4096, 00:19:36.128 "send_buf_size": 4096, 00:19:36.128 "enable_recv_pipe": true, 00:19:36.128 "enable_quickack": false, 00:19:36.128 "enable_placement_id": 0, 00:19:36.128 "enable_zerocopy_send_server": true, 00:19:36.128 "enable_zerocopy_send_client": false, 00:19:36.128 "zerocopy_threshold": 0, 00:19:36.128 "tls_version": 0, 00:19:36.128 "enable_ktls": false 00:19:36.128 } 00:19:36.128 }, 00:19:36.128 { 00:19:36.128 "method": "sock_impl_set_options", 00:19:36.128 "params": { 00:19:36.128 "impl_name": "posix", 00:19:36.128 "recv_buf_size": 2097152, 00:19:36.128 "send_buf_size": 2097152, 00:19:36.128 "enable_recv_pipe": true, 00:19:36.128 "enable_quickack": false, 00:19:36.128 "enable_placement_id": 0, 00:19:36.128 "enable_zerocopy_send_server": true, 00:19:36.128 "enable_zerocopy_send_client": false, 00:19:36.128 "zerocopy_threshold": 0, 00:19:36.128 "tls_version": 0, 00:19:36.128 "enable_ktls": false 00:19:36.128 } 00:19:36.128 } 00:19:36.128 ] 00:19:36.128 }, 00:19:36.128 { 00:19:36.128 "subsystem": "vmd", 00:19:36.128 "config": [] 00:19:36.128 }, 00:19:36.128 { 00:19:36.128 "subsystem": "accel", 
00:19:36.128 "config": [ 00:19:36.128 { 00:19:36.128 "method": "accel_set_options", 00:19:36.128 "params": { 00:19:36.128 "small_cache_size": 128, 00:19:36.128 "large_cache_size": 16, 00:19:36.128 "task_count": 2048, 00:19:36.128 "sequence_count": 2048, 00:19:36.128 "buf_count": 2048 00:19:36.128 } 00:19:36.128 } 00:19:36.128 ] 00:19:36.128 }, 00:19:36.128 { 00:19:36.128 "subsystem": "bdev", 00:19:36.128 "config": [ 00:19:36.128 { 00:19:36.128 "method": "bdev_set_options", 00:19:36.128 "params": { 00:19:36.128 "bdev_io_pool_size": 65535, 00:19:36.128 "bdev_io_cache_size": 256, 00:19:36.128 "bdev_auto_examine": true, 00:19:36.128 "iobuf_small_cache_size": 128, 00:19:36.128 "iobuf_large_cache_size": 16 00:19:36.128 } 00:19:36.128 }, 00:19:36.128 { 00:19:36.128 "method": "bdev_raid_set_options", 00:19:36.128 "params": { 00:19:36.128 "process_window_size_kb": 1024, 00:19:36.128 "process_max_bandwidth_mb_sec": 0 00:19:36.128 } 00:19:36.128 }, 00:19:36.129 { 00:19:36.129 "method": "bdev_iscsi_set_options", 00:19:36.129 "params": { 00:19:36.129 "timeout_sec": 30 00:19:36.129 } 00:19:36.129 }, 00:19:36.129 { 00:19:36.129 "method": "bdev_nvme_set_options", 00:19:36.129 "params": { 00:19:36.129 "action_on_timeout": "none", 00:19:36.129 "timeout_us": 0, 00:19:36.129 "timeout_admin_us": 0, 00:19:36.129 "keep_alive_timeout_ms": 10000, 00:19:36.129 "arbitration_burst": 0, 00:19:36.129 "low_priority_weight": 0, 00:19:36.129 "medium_priority_weight": 0, 00:19:36.129 "high_priority_weight": 0, 00:19:36.129 "nvme_adminq_poll_period_us": 10000, 00:19:36.129 "nvme_ioq_poll_period_us": 0, 00:19:36.129 "io_queue_requests": 0, 00:19:36.129 "delay_cmd_submit": true, 00:19:36.129 "transport_retry_count": 4, 00:19:36.129 "bdev_retry_count": 3, 00:19:36.129 "transport_ack_timeout": 0, 00:19:36.129 "ctrlr_loss_timeout_sec": 0, 00:19:36.129 "reconnect_delay_sec": 0, 00:19:36.129 "fast_io_fail_timeout_sec": 0, 00:19:36.129 "disable_auto_failback": false, 00:19:36.129 "generate_uuids": false, 00:19:36.129 "transport_tos": 0, 00:19:36.129 "nvme_error_stat": false, 00:19:36.129 "rdma_srq_size": 0, 00:19:36.129 "io_path_stat": false, 00:19:36.129 "allow_accel_sequence": false, 00:19:36.129 "rdma_max_cq_size": 0, 00:19:36.129 "rdma_cm_event_timeout_ms": 0, 00:19:36.129 "dhchap_digests": [ 00:19:36.129 "sha256", 00:19:36.129 "sha384", 00:19:36.129 "sha512" 00:19:36.129 ], 00:19:36.129 "dhchap_dhgroups": [ 00:19:36.129 "null", 00:19:36.129 "ffdhe2048", 00:19:36.129 "ffdhe3072", 00:19:36.129 "ffdhe4096", 00:19:36.129 "ffdhe6144", 00:19:36.129 "ffdhe8192" 00:19:36.129 ] 00:19:36.129 } 00:19:36.129 }, 00:19:36.129 { 00:19:36.129 "method": "bdev_nvme_set_hotplug", 00:19:36.129 "params": { 00:19:36.129 "period_us": 100000, 00:19:36.129 "enable": false 00:19:36.129 } 00:19:36.129 }, 00:19:36.129 { 00:19:36.129 "method": "bdev_malloc_create", 00:19:36.129 "params": { 00:19:36.129 "name": "malloc0", 00:19:36.129 "num_blocks": 8192, 00:19:36.129 "block_size": 4096, 00:19:36.129 "physical_block_size": 4096, 00:19:36.129 "uuid": "0dd77360-04f4-4983-807b-ac46d2b76a50", 00:19:36.129 "optimal_io_boundary": 0, 00:19:36.129 "md_size": 0, 00:19:36.129 "dif_type": 0, 00:19:36.129 "dif_is_head_of_md": false, 00:19:36.129 "dif_pi_format": 0 00:19:36.129 } 00:19:36.129 }, 00:19:36.129 { 00:19:36.129 "method": "bdev_wait_for_examine" 00:19:36.129 } 00:19:36.129 ] 00:19:36.129 }, 00:19:36.129 { 00:19:36.129 "subsystem": "nbd", 00:19:36.129 "config": [] 00:19:36.129 }, 00:19:36.129 { 00:19:36.129 "subsystem": "scheduler", 00:19:36.129 "config": [ 
00:19:36.129 { 00:19:36.129 "method": "framework_set_scheduler", 00:19:36.129 "params": { 00:19:36.129 "name": "static" 00:19:36.129 } 00:19:36.129 } 00:19:36.129 ] 00:19:36.129 }, 00:19:36.129 { 00:19:36.129 "subsystem": "nvmf", 00:19:36.129 "config": [ 00:19:36.129 { 00:19:36.129 "method": "nvmf_set_config", 00:19:36.129 "params": { 00:19:36.129 "discovery_filter": "match_any", 00:19:36.129 "admin_cmd_passthru": { 00:19:36.129 "identify_ctrlr": false 00:19:36.129 }, 00:19:36.129 "dhchap_digests": [ 00:19:36.129 "sha256", 00:19:36.129 "sha384", 00:19:36.129 "sha512" 00:19:36.129 ], 00:19:36.129 "dhchap_dhgroups": [ 00:19:36.129 "null", 00:19:36.129 "ffdhe2048", 00:19:36.129 "ffdhe3072", 00:19:36.129 "ffdhe4096", 00:19:36.129 "ffdhe6144", 00:19:36.129 "ffdhe8192" 00:19:36.129 ] 00:19:36.129 } 00:19:36.129 }, 00:19:36.129 { 00:19:36.129 "method": "nvmf_set_max_subsystems", 00:19:36.129 "params": { 00:19:36.129 "max_subsystems": 1024 00:19:36.129 } 00:19:36.129 }, 00:19:36.129 { 00:19:36.129 "method": "nvmf_set_crdt", 00:19:36.129 "params": { 00:19:36.129 "crdt1": 0, 00:19:36.129 "crdt2": 0, 00:19:36.129 "crdt3": 0 00:19:36.129 } 00:19:36.129 }, 00:19:36.129 { 00:19:36.129 "method": "nvmf_create_transport", 00:19:36.129 "params": { 00:19:36.129 "trtype": "TCP", 00:19:36.129 "max_queue_depth": 128, 00:19:36.129 "max_io_qpairs_per_ctrlr": 127, 00:19:36.129 "in_capsule_data_size": 4096, 00:19:36.129 "max_io_size": 131072, 00:19:36.129 "io_unit_size": 131072, 00:19:36.129 "max_aq_depth": 128, 00:19:36.129 "num_shared_buffers": 511, 00:19:36.129 "buf_cache_size": 4294967295, 00:19:36.129 "dif_insert_or_strip": false, 00:19:36.129 "zcopy": false, 00:19:36.129 "c2h_success": false, 00:19:36.129 "sock_priority": 0, 00:19:36.129 "abort_timeout_sec": 1, 00:19:36.129 "ack_timeout": 0, 00:19:36.129 "data_wr_pool_size": 0 00:19:36.129 } 00:19:36.129 }, 00:19:36.129 { 00:19:36.129 "method": "nvmf_create_subsystem", 00:19:36.129 "params": { 00:19:36.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:36.129 "allow_any_host": false, 00:19:36.129 "serial_number": "00000000000000000000", 00:19:36.129 "model_number": "SPDK bdev Controller", 00:19:36.129 "max_namespaces": 32, 00:19:36.129 "min_cntlid": 1, 00:19:36.129 "max_cntlid": 65519, 00:19:36.129 "ana_reporting": false 00:19:36.129 } 00:19:36.129 }, 00:19:36.129 { 00:19:36.129 "method": "nvmf_subsystem_add_host", 00:19:36.129 "params": { 00:19:36.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:36.129 "host": "nqn.2016-06.io.spdk:host1", 00:19:36.129 "psk": "key0" 00:19:36.129 } 00:19:36.129 }, 00:19:36.129 { 00:19:36.129 "method": "nvmf_subsystem_add_ns", 00:19:36.129 "params": { 00:19:36.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:36.129 "namespace": { 00:19:36.129 "nsid": 1, 00:19:36.129 "bdev_name": "malloc0", 00:19:36.129 "nguid": "0DD7736004F44983807BAC46D2B76A50", 00:19:36.129 "uuid": "0dd77360-04f4-4983-807b-ac46d2b76a50", 00:19:36.129 "no_auto_visible": false 00:19:36.129 } 00:19:36.129 } 00:19:36.129 }, 00:19:36.129 { 00:19:36.129 "method": "nvmf_subsystem_add_listener", 00:19:36.129 "params": { 00:19:36.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:36.129 "listen_address": { 00:19:36.129 "trtype": "TCP", 00:19:36.129 "adrfam": "IPv4", 00:19:36.129 "traddr": "10.0.0.2", 00:19:36.129 "trsvcid": "4420" 00:19:36.129 }, 00:19:36.129 "secure_channel": false, 00:19:36.129 "sock_impl": "ssl" 00:19:36.129 } 00:19:36.129 } 00:19:36.129 ] 00:19:36.129 } 00:19:36.129 ] 00:19:36.129 }' 00:19:36.129 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:19:36.129 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.129 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1275691 00:19:36.129 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:36.129 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1275691 00:19:36.129 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1275691 ']' 00:19:36.129 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.129 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:36.129 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.129 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:36.129 18:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.129 [2024-10-08 18:26:54.297433] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:19:36.129 [2024-10-08 18:26:54.297526] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.129 [2024-10-08 18:26:54.360110] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.387 [2024-10-08 18:26:54.458656] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.387 [2024-10-08 18:26:54.458716] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.387 [2024-10-08 18:26:54.458745] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.387 [2024-10-08 18:26:54.458756] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:36.387 [2024-10-08 18:26:54.458765] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:36.387 [2024-10-08 18:26:54.459351] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.645 [2024-10-08 18:26:54.716757] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.645 [2024-10-08 18:26:54.748777] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:36.645 [2024-10-08 18:26:54.749061] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:37.212 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:37.212 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:37.212 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:37.212 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:37.212 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.212 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.212 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1275843 00:19:37.212 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1275843 /var/tmp/bdevperf.sock 00:19:37.212 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1275843 ']' 00:19:37.212 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:37.212 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:37.212 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:37.212 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:37.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
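This final pass restarts both applications from the JSON captured earlier with save_config instead of re-issuing the individual RPCs; the saved configs are echoed back through process substitutions, which is why the command lines reference /dev/fd/62 and /dev/fd/63. A sketch of the same pattern (rpc_cmd in the trace is a test-harness wrapper around rpc.py; ./spdk again stands in for the workspace path):

  # capture the live configuration of the target and of bdevperf
  tgtcfg=$(./spdk/scripts/rpc.py save_config)
  bperfcfg=$(./spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
  # relaunch both from the saved JSON; <(...) appears as /dev/fd/NN on the command line
  ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
  ./spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &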
00:19:37.212 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:19:37.212 "subsystems": [ 00:19:37.212 { 00:19:37.212 "subsystem": "keyring", 00:19:37.212 "config": [ 00:19:37.212 { 00:19:37.212 "method": "keyring_file_add_key", 00:19:37.212 "params": { 00:19:37.212 "name": "key0", 00:19:37.212 "path": "/tmp/tmp.NPWhb1Ovl0" 00:19:37.212 } 00:19:37.212 } 00:19:37.212 ] 00:19:37.212 }, 00:19:37.212 { 00:19:37.212 "subsystem": "iobuf", 00:19:37.212 "config": [ 00:19:37.212 { 00:19:37.212 "method": "iobuf_set_options", 00:19:37.212 "params": { 00:19:37.212 "small_pool_count": 8192, 00:19:37.212 "large_pool_count": 1024, 00:19:37.212 "small_bufsize": 8192, 00:19:37.212 "large_bufsize": 135168 00:19:37.212 } 00:19:37.212 } 00:19:37.212 ] 00:19:37.212 }, 00:19:37.212 { 00:19:37.212 "subsystem": "sock", 00:19:37.212 "config": [ 00:19:37.212 { 00:19:37.212 "method": "sock_set_default_impl", 00:19:37.212 "params": { 00:19:37.212 "impl_name": "posix" 00:19:37.212 } 00:19:37.212 }, 00:19:37.212 { 00:19:37.212 "method": "sock_impl_set_options", 00:19:37.212 "params": { 00:19:37.212 "impl_name": "ssl", 00:19:37.212 "recv_buf_size": 4096, 00:19:37.212 "send_buf_size": 4096, 00:19:37.212 "enable_recv_pipe": true, 00:19:37.212 "enable_quickack": false, 00:19:37.212 "enable_placement_id": 0, 00:19:37.212 "enable_zerocopy_send_server": true, 00:19:37.212 "enable_zerocopy_send_client": false, 00:19:37.212 "zerocopy_threshold": 0, 00:19:37.212 "tls_version": 0, 00:19:37.212 "enable_ktls": false 00:19:37.212 } 00:19:37.212 }, 00:19:37.212 { 00:19:37.212 "method": "sock_impl_set_options", 00:19:37.212 "params": { 00:19:37.212 "impl_name": "posix", 00:19:37.212 "recv_buf_size": 2097152, 00:19:37.212 "send_buf_size": 2097152, 00:19:37.212 "enable_recv_pipe": true, 00:19:37.212 "enable_quickack": false, 00:19:37.212 "enable_placement_id": 0, 00:19:37.212 "enable_zerocopy_send_server": true, 00:19:37.212 "enable_zerocopy_send_client": false, 00:19:37.212 "zerocopy_threshold": 0, 00:19:37.212 "tls_version": 0, 00:19:37.212 "enable_ktls": false 00:19:37.212 } 00:19:37.212 } 00:19:37.212 ] 00:19:37.212 }, 00:19:37.212 { 00:19:37.212 "subsystem": "vmd", 00:19:37.212 "config": [] 00:19:37.212 }, 00:19:37.212 { 00:19:37.212 "subsystem": "accel", 00:19:37.212 "config": [ 00:19:37.212 { 00:19:37.212 "method": "accel_set_options", 00:19:37.212 "params": { 00:19:37.212 "small_cache_size": 128, 00:19:37.212 "large_cache_size": 16, 00:19:37.212 "task_count": 2048, 00:19:37.212 "sequence_count": 2048, 00:19:37.212 "buf_count": 2048 00:19:37.212 } 00:19:37.212 } 00:19:37.212 ] 00:19:37.212 }, 00:19:37.212 { 00:19:37.212 "subsystem": "bdev", 00:19:37.212 "config": [ 00:19:37.212 { 00:19:37.212 "method": "bdev_set_options", 00:19:37.212 "params": { 00:19:37.212 "bdev_io_pool_size": 65535, 00:19:37.212 "bdev_io_cache_size": 256, 00:19:37.212 "bdev_auto_examine": true, 00:19:37.212 "iobuf_small_cache_size": 128, 00:19:37.212 "iobuf_large_cache_size": 16 00:19:37.212 } 00:19:37.212 }, 00:19:37.212 { 00:19:37.212 "method": "bdev_raid_set_options", 00:19:37.212 "params": { 00:19:37.212 "process_window_size_kb": 1024, 00:19:37.212 "process_max_bandwidth_mb_sec": 0 00:19:37.212 } 00:19:37.212 }, 00:19:37.212 { 00:19:37.212 "method": "bdev_iscsi_set_options", 00:19:37.212 "params": { 00:19:37.212 "timeout_sec": 30 00:19:37.212 } 00:19:37.212 }, 00:19:37.212 { 00:19:37.212 "method": "bdev_nvme_set_options", 00:19:37.212 "params": { 00:19:37.212 "action_on_timeout": "none", 00:19:37.212 "timeout_us": 0, 
00:19:37.212 "timeout_admin_us": 0, 00:19:37.212 "keep_alive_timeout_ms": 10000, 00:19:37.212 "arbitration_burst": 0, 00:19:37.212 "low_priority_weight": 0, 00:19:37.212 "medium_priority_weight": 0, 00:19:37.212 "high_priority_weight": 0, 00:19:37.212 "nvme_adminq_poll_period_us": 10000, 00:19:37.212 "nvme_ioq_poll_period_us": 0, 00:19:37.212 "io_queue_requests": 512, 00:19:37.212 "delay_cmd_submit": true, 00:19:37.212 "transport_retry_count": 4, 00:19:37.212 "bdev_retry_count": 3, 00:19:37.212 "transport_ack_timeout": 0, 00:19:37.212 "ctrlr_loss_timeout_sec": 0, 00:19:37.212 "reconnect_delay_sec": 0, 00:19:37.212 "fast_io_fail_timeout_sec": 0, 00:19:37.212 "disable_auto_failback": false, 00:19:37.212 "generate_uuids": false, 00:19:37.212 "transport_tos": 0, 00:19:37.212 "nvme_error_stat": false, 00:19:37.212 "rdma_srq_size": 0, 00:19:37.212 "io_path_stat": false, 00:19:37.212 "allow_accel_sequence": false, 00:19:37.212 "rdma_max_cq_size": 0, 00:19:37.213 "rdma_cm_event_timeout_ms": 0, 00:19:37.213 "dhchap_digests": [ 00:19:37.213 "sha256", 00:19:37.213 "sha384", 00:19:37.213 "sha512" 00:19:37.213 ], 00:19:37.213 "dhchap_dhgroups": [ 00:19:37.213 "null", 00:19:37.213 "ffdhe2048", 00:19:37.213 "ffdhe3072", 00:19:37.213 "ffdhe4096", 00:19:37.213 "ffdhe6144", 00:19:37.213 "ffdhe8192" 00:19:37.213 ] 00:19:37.213 } 00:19:37.213 }, 00:19:37.213 { 00:19:37.213 "method": "bdev_nvme_attach_controller", 00:19:37.213 "params": { 00:19:37.213 "name": "nvme0", 00:19:37.213 "trtype": "TCP", 00:19:37.213 "adrfam": "IPv4", 00:19:37.213 "traddr": "10.0.0.2", 00:19:37.213 "trsvcid": "4420", 00:19:37.213 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.213 "prchk_reftag": false, 00:19:37.213 "prchk_guard": false, 00:19:37.213 "ctrlr_loss_timeout_sec": 0, 00:19:37.213 "reconnect_delay_sec": 0, 00:19:37.213 "fast_io_fail_timeout_sec": 0, 00:19:37.213 "psk": "key0", 00:19:37.213 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:37.213 "hdgst": false, 00:19:37.213 "ddgst": false, 00:19:37.213 "multipath": "multipath" 00:19:37.213 } 00:19:37.213 }, 00:19:37.213 { 00:19:37.213 "method": "bdev_nvme_set_hotplug", 00:19:37.213 "params": { 00:19:37.213 "period_us": 100000, 00:19:37.213 "enable": false 00:19:37.213 } 00:19:37.213 }, 00:19:37.213 { 00:19:37.213 "method": "bdev_enable_histogram", 00:19:37.213 "params": { 00:19:37.213 "name": "nvme0n1", 00:19:37.213 "enable": true 00:19:37.213 } 00:19:37.213 }, 00:19:37.213 { 00:19:37.213 "method": "bdev_wait_for_examine" 00:19:37.213 } 00:19:37.213 ] 00:19:37.213 }, 00:19:37.213 { 00:19:37.213 "subsystem": "nbd", 00:19:37.213 "config": [] 00:19:37.213 } 00:19:37.213 ] 00:19:37.213 }' 00:19:37.213 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:37.213 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.213 [2024-10-08 18:26:55.357591] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
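[Editor's note] The JSON document traced above never touches the filesystem: bdevperf receives it as "-c /dev/fd/63", meaning the echoed configuration is attached to an inherited file descriptor of the child process, which is the shape bash process substitution produces. A minimal, self-contained illustration of that mechanism, with cat standing in for bdevperf:

  #!/usr/bin/env bash
  # Illustration of the /dev/fd/<n> pattern seen above: a generated JSON
  # document is handed to a consumer without being written to disk first.
  config='{ "subsystems": [ { "subsystem": "keyring", "config": [] } ] }'

  # Process substitution: bash runs the echo, exposes its output as /dev/fd/NN,
  # and passes that path as the argument -- the same shape as "-c /dev/fd/63"
  # in the bdevperf invocation above. cat is only a stand-in consumer here.
  cat <(echo "$config")

A plausible motivation is that the per-run configuration (temporary key paths, generated TLS parameters) never has to be written out and cleaned up separately.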
00:19:37.213 [2024-10-08 18:26:55.357686] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1275843 ] 00:19:37.213 [2024-10-08 18:26:55.415684] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.471 [2024-10-08 18:26:55.522990] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.471 [2024-10-08 18:26:55.705992] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:38.404 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:38.404 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:38.404 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:38.404 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:19:38.404 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.404 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:38.662 Running I/O for 1 seconds... 00:19:39.594 3487.00 IOPS, 13.62 MiB/s 00:19:39.594 Latency(us) 00:19:39.594 [2024-10-08T16:26:57.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.594 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:39.594 Verification LBA range: start 0x0 length 0x2000 00:19:39.594 nvme0n1 : 1.02 3542.59 13.84 0.00 0.00 35817.70 8058.50 33981.63 00:19:39.594 [2024-10-08T16:26:57.881Z] =================================================================================================================== 00:19:39.594 [2024-10-08T16:26:57.881Z] Total : 3542.59 13.84 0.00 0.00 35817.70 8058.50 33981.63 00:19:39.594 { 00:19:39.594 "results": [ 00:19:39.594 { 00:19:39.594 "job": "nvme0n1", 00:19:39.594 "core_mask": "0x2", 00:19:39.594 "workload": "verify", 00:19:39.594 "status": "finished", 00:19:39.594 "verify_range": { 00:19:39.594 "start": 0, 00:19:39.594 "length": 8192 00:19:39.594 }, 00:19:39.594 "queue_depth": 128, 00:19:39.594 "io_size": 4096, 00:19:39.594 "runtime": 1.02044, 00:19:39.594 "iops": 3542.5894712084983, 00:19:39.594 "mibps": 13.838240121908196, 00:19:39.594 "io_failed": 0, 00:19:39.594 "io_timeout": 0, 00:19:39.594 "avg_latency_us": 35817.69850642898, 00:19:39.594 "min_latency_us": 8058.500740740741, 00:19:39.594 "max_latency_us": 33981.62962962963 00:19:39.594 } 00:19:39.594 ], 00:19:39.594 "core_count": 1 00:19:39.594 } 00:19:39.594 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:39.594 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:19:39.594 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:39.594 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:19:39.594 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:19:39.594 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = 
--pid ']' 00:19:39.595 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:39.595 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:19:39.595 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:19:39.595 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:19:39.595 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:39.595 nvmf_trace.0 00:19:39.595 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:19:39.595 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1275843 00:19:39.595 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1275843 ']' 00:19:39.595 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1275843 00:19:39.595 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:39.595 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:39.595 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1275843 00:19:39.854 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:39.854 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:39.854 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1275843' 00:19:39.854 killing process with pid 1275843 00:19:39.854 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1275843 00:19:39.854 Received shutdown signal, test time was about 1.000000 seconds 00:19:39.854 00:19:39.854 Latency(us) 00:19:39.854 [2024-10-08T16:26:58.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.854 [2024-10-08T16:26:58.141Z] =================================================================================================================== 00:19:39.854 [2024-10-08T16:26:58.141Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:39.854 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1275843 00:19:40.112 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:40.112 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:40.112 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:40.112 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:40.112 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:40.112 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:40.112 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:40.112 rmmod nvme_tcp 00:19:40.112 rmmod nvme_fabrics 00:19:40.112 rmmod nvme_keyring 00:19:40.112 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:40.112 18:26:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:40.112 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:40.112 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 1275691 ']' 00:19:40.112 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 1275691 00:19:40.112 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1275691 ']' 00:19:40.112 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1275691 00:19:40.112 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:40.112 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:40.112 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1275691 00:19:40.112 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:40.112 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:40.112 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1275691' 00:19:40.112 killing process with pid 1275691 00:19:40.112 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1275691 00:19:40.112 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1275691 00:19:40.369 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:40.369 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:19:40.369 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:19:40.369 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:19:40.369 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:19:40.369 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:19:40.369 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:19:40.369 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:40.369 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:40.369 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.369 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:40.369 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.904 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:42.904 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.WHQ3tZooah /tmp/tmp.gx0K6o5M3m /tmp/tmp.NPWhb1Ovl0 00:19:42.904 00:19:42.904 real 1m25.449s 00:19:42.904 user 2m23.391s 00:19:42.904 sys 0m25.351s 00:19:42.904 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:42.904 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.904 ************************************ 00:19:42.904 END TEST nvmf_tls 
00:19:42.904 ************************************ 00:19:42.904 18:27:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:42.904 18:27:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:42.904 18:27:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:42.904 18:27:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:42.904 ************************************ 00:19:42.904 START TEST nvmf_fips 00:19:42.904 ************************************ 00:19:42.904 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:42.904 * Looking for test storage... 00:19:42.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:42.904 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:42.904 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:19:42.904 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:42.904 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:42.904 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:42.904 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:42.904 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:42.904 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:42.904 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:42.904 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:42.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.905 --rc genhtml_branch_coverage=1 00:19:42.905 --rc genhtml_function_coverage=1 00:19:42.905 --rc genhtml_legend=1 00:19:42.905 --rc geninfo_all_blocks=1 00:19:42.905 --rc geninfo_unexecuted_blocks=1 00:19:42.905 00:19:42.905 ' 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:42.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.905 --rc genhtml_branch_coverage=1 00:19:42.905 --rc genhtml_function_coverage=1 00:19:42.905 --rc genhtml_legend=1 00:19:42.905 --rc geninfo_all_blocks=1 00:19:42.905 --rc geninfo_unexecuted_blocks=1 00:19:42.905 00:19:42.905 ' 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:42.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.905 --rc genhtml_branch_coverage=1 00:19:42.905 --rc genhtml_function_coverage=1 00:19:42.905 --rc genhtml_legend=1 00:19:42.905 --rc geninfo_all_blocks=1 00:19:42.905 --rc geninfo_unexecuted_blocks=1 00:19:42.905 00:19:42.905 ' 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:42.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.905 --rc genhtml_branch_coverage=1 00:19:42.905 --rc genhtml_function_coverage=1 00:19:42.905 --rc genhtml_legend=1 00:19:42.905 --rc geninfo_all_blocks=1 00:19:42.905 --rc geninfo_unexecuted_blocks=1 00:19:42.905 00:19:42.905 ' 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:42.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:42.905 18:27:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:42.905 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:19:42.906 Error setting digest 00:19:42.906 4012DB58217F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:42.906 4012DB58217F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:19:42.906 
18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:19:42.906 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:44.809 18:27:02 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:44.809 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:44.809 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:44.809 18:27:02 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:44.809 Found net devices under 0000:09:00.0: cvl_0_0 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:44.809 Found net devices under 0000:09:00.1: cvl_0_1 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:44.809 18:27:02 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:44.809 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:44.809 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:44.809 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:44.809 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:44.809 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:44.809 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:44.809 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:44.809 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:44.809 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:45.068 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:45.068 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:45.068 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:45.068 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:45.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:45.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:19:45.068 00:19:45.068 --- 10.0.0.2 ping statistics --- 00:19:45.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.068 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:19:45.068 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:45.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:45.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:19:45.068 00:19:45.068 --- 10.0.0.1 ping statistics --- 00:19:45.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.068 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:19:45.068 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:45.068 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:19:45.068 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:45.068 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:45.068 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:45.068 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:45.068 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:45.068 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:45.068 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:45.068 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:45.068 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:45.068 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:45.068 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:45.068 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=1278323 00:19:45.068 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:45.068 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 1278323 00:19:45.068 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1278323 ']' 00:19:45.068 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.068 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:45.068 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.068 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:45.069 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:45.069 [2024-10-08 18:27:03.237661] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
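[Editor's note] The nvmf_tcp_init sequence traced above splits the node's two E810 ports into separate network stacks: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and carries the target address 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, so NVMe/TCP traffic actually crosses the link between the two ports. A standalone sketch of the same topology, with interface names and addresses taken from the trace (not the harness's own code; run as root, adjust names for other hardware):

  #!/usr/bin/env bash
  # Sketch of the namespace topology built by nvmf_tcp_init in the trace above.
  # cvl_0_0 / cvl_0_1 are the two ice (E810) ports discovered earlier.
  set -e
  tgt_if=cvl_0_0; ini_if=cvl_0_1; ns=cvl_0_0_ns_spdk

  ip netns add "$ns"                        # target side gets its own namespace
  ip link set "$tgt_if" netns "$ns"         # move one port into it
  ip addr add 10.0.0.1/24 dev "$ini_if"     # initiator address, root namespace
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
  ip link set "$ini_if" up
  ip netns exec "$ns" ip link set "$tgt_if" up
  ip netns exec "$ns" ip link set lo up

  # let the NVMe/TCP port through on the initiator side, as the harness does
  iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT

  ping -c 1 10.0.0.2                        # initiator -> target
  ip netns exec "$ns" ping -c 1 10.0.0.1    # target -> initiator

The FIPS-test target is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2, as traced above), which is why its 10.0.0.2:4420 listener is only reachable across the cvl link.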
00:19:45.069 [2024-10-08 18:27:03.237741] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:45.069 [2024-10-08 18:27:03.300916] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.327 [2024-10-08 18:27:03.415216] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:45.327 [2024-10-08 18:27:03.415276] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:45.327 [2024-10-08 18:27:03.415315] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:45.327 [2024-10-08 18:27:03.415327] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:45.327 [2024-10-08 18:27:03.415337] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:45.327 [2024-10-08 18:27:03.415981] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:45.327 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:45.327 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:19:45.327 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:45.327 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:45.327 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:45.327 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:45.327 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:45.327 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:45.327 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:45.327 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.lRy 00:19:45.327 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:45.327 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.lRy 00:19:45.327 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.lRy 00:19:45.327 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.lRy 00:19:45.327 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:45.892 [2024-10-08 18:27:03.875796] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:45.892 [2024-10-08 18:27:03.891785] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:45.892 [2024-10-08 18:27:03.892048] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:45.892 malloc0 00:19:45.892 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:45.892 18:27:03 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1278355 00:19:45.892 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:45.892 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1278355 /var/tmp/bdevperf.sock 00:19:45.892 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1278355 ']' 00:19:45.892 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:45.892 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:45.892 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:45.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:45.892 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:45.892 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:45.892 [2024-10-08 18:27:04.036093] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:19:45.892 [2024-10-08 18:27:04.036183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1278355 ] 00:19:45.892 [2024-10-08 18:27:04.095477] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.150 [2024-10-08 18:27:04.206849] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:46.150 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:46.150 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:19:46.150 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.lRy 00:19:46.411 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:46.671 [2024-10-08 18:27:04.864536] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:46.671 TLSTESTn1 00:19:46.671 18:27:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:46.928 Running I/O for 10 seconds... 
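[Editor's note] Stripped of the xtrace noise, the initiator-side TLS setup in the fips.sh trace above comes down to provisioning the interchange-format PSK, registering it with the keyring, attaching a controller that references it by name, and starting the workload. A condensed sketch using the same socket, NQNs and (publicly shown) sample key as the trace; rpc.py/bdevperf.py paths assume an SPDK checkout, and a bdevperf instance is assumed to be running already with "-m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10" as in the trace:

  #!/usr/bin/env bash
  # Condensed from the fips.sh trace above: provision the PSK, attach a
  # TLS-protected NVMe/TCP controller through bdevperf's RPC socket, run I/O.
  rpc_py=./scripts/rpc.py
  perf_py=./examples/bdev/bdevperf/bdevperf.py
  sock=/var/tmp/bdevperf.sock

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=$(mktemp -t spdk-psk.XXX)
  echo -n "$key" > "$key_path"
  chmod 0600 "$key_path"                    # restrict the key file, as the test does

  # register the key file under the name the attach call will reference
  "$rpc_py" -s "$sock" keyring_file_add_key key0 "$key_path"

  # attach the target's TLS listener (10.0.0.2:4420) using that PSK
  "$rpc_py" -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

  # drive the workload configured on the bdevperf command line (-w verify -t 10)
  "$perf_py" -s "$sock" perform_tests

TLSTESTn1 in the output that follows is the bdev produced by this attach call (controller name TLSTEST, namespace 1); perform_tests then runs the 10-second verify workload whose per-second IOPS samples and summary appear below.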
00:19:48.794 3184.00 IOPS, 12.44 MiB/s [2024-10-08T16:27:08.454Z] 3269.50 IOPS, 12.77 MiB/s [2024-10-08T16:27:09.387Z] 3278.33 IOPS, 12.81 MiB/s [2024-10-08T16:27:10.320Z] 3292.50 IOPS, 12.86 MiB/s [2024-10-08T16:27:11.252Z] 3305.80 IOPS, 12.91 MiB/s [2024-10-08T16:27:12.185Z] 3315.00 IOPS, 12.95 MiB/s [2024-10-08T16:27:13.118Z] 3324.57 IOPS, 12.99 MiB/s [2024-10-08T16:27:14.490Z] 3317.50 IOPS, 12.96 MiB/s [2024-10-08T16:27:15.424Z] 3316.78 IOPS, 12.96 MiB/s [2024-10-08T16:27:15.424Z] 3322.30 IOPS, 12.98 MiB/s 00:19:57.137 Latency(us) 00:19:57.137 [2024-10-08T16:27:15.424Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.137 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:57.137 Verification LBA range: start 0x0 length 0x2000 00:19:57.137 TLSTESTn1 : 10.02 3329.02 13.00 0.00 0.00 38373.66 8301.23 36700.16 00:19:57.137 [2024-10-08T16:27:15.424Z] =================================================================================================================== 00:19:57.137 [2024-10-08T16:27:15.424Z] Total : 3329.02 13.00 0.00 0.00 38373.66 8301.23 36700.16 00:19:57.137 { 00:19:57.137 "results": [ 00:19:57.137 { 00:19:57.137 "job": "TLSTESTn1", 00:19:57.137 "core_mask": "0x4", 00:19:57.137 "workload": "verify", 00:19:57.137 "status": "finished", 00:19:57.137 "verify_range": { 00:19:57.137 "start": 0, 00:19:57.137 "length": 8192 00:19:57.137 }, 00:19:57.137 "queue_depth": 128, 00:19:57.137 "io_size": 4096, 00:19:57.137 "runtime": 10.018273, 00:19:57.137 "iops": 3329.016887441578, 00:19:57.137 "mibps": 13.003972216568664, 00:19:57.137 "io_failed": 0, 00:19:57.137 "io_timeout": 0, 00:19:57.137 "avg_latency_us": 38373.661774370696, 00:19:57.137 "min_latency_us": 8301.226666666667, 00:19:57.137 "max_latency_us": 36700.16 00:19:57.137 } 00:19:57.137 ], 00:19:57.137 "core_count": 1 00:19:57.137 } 00:19:57.137 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:57.137 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:57.137 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:19:57.137 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:19:57.137 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:19:57.137 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:57.137 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:19:57.137 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:19:57.137 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:19:57.137 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:57.137 nvmf_trace.0 00:19:57.137 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:19:57.137 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1278355 00:19:57.137 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1278355 ']' 00:19:57.137 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@954 -- # kill -0 1278355 00:19:57.137 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:19:57.137 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:57.137 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1278355 00:19:57.137 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:57.137 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:57.137 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1278355' 00:19:57.137 killing process with pid 1278355 00:19:57.137 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1278355 00:19:57.137 Received shutdown signal, test time was about 10.000000 seconds 00:19:57.137 00:19:57.137 Latency(us) 00:19:57.137 [2024-10-08T16:27:15.424Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.137 [2024-10-08T16:27:15.424Z] =================================================================================================================== 00:19:57.137 [2024-10-08T16:27:15.424Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:57.137 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1278355 00:19:57.395 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:57.395 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:57.395 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:57.395 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:57.396 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:57.396 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:57.396 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:57.396 rmmod nvme_tcp 00:19:57.396 rmmod nvme_fabrics 00:19:57.396 rmmod nvme_keyring 00:19:57.396 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:57.396 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:57.396 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:57.396 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 1278323 ']' 00:19:57.396 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 1278323 00:19:57.396 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1278323 ']' 00:19:57.396 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1278323 00:19:57.396 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:19:57.396 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:57.396 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1278323 00:19:57.396 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:57.396 18:27:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:57.396 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1278323' 00:19:57.396 killing process with pid 1278323 00:19:57.396 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1278323 00:19:57.396 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1278323 00:19:57.654 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:57.654 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:19:57.654 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:19:57.654 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:19:57.654 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:19:57.654 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:19:57.654 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:19:57.654 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:57.654 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:57.654 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.654 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:57.654 18:27:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.201 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:00.201 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.lRy 00:20:00.201 00:20:00.201 real 0m17.337s 00:20:00.201 user 0m19.642s 00:20:00.201 sys 0m6.782s 00:20:00.201 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:00.201 18:27:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:00.201 ************************************ 00:20:00.201 END TEST nvmf_fips 00:20:00.201 ************************************ 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:00.201 ************************************ 00:20:00.201 START TEST nvmf_control_msg_list 00:20:00.201 ************************************ 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:00.201 * Looking for test storage... 
00:20:00.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:00.201 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:00.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.202 --rc genhtml_branch_coverage=1 00:20:00.202 --rc genhtml_function_coverage=1 00:20:00.202 --rc genhtml_legend=1 00:20:00.202 --rc geninfo_all_blocks=1 00:20:00.202 --rc geninfo_unexecuted_blocks=1 00:20:00.202 00:20:00.202 ' 00:20:00.202 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:00.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.202 --rc genhtml_branch_coverage=1 00:20:00.202 --rc genhtml_function_coverage=1 00:20:00.202 --rc genhtml_legend=1 00:20:00.202 --rc geninfo_all_blocks=1 00:20:00.202 --rc geninfo_unexecuted_blocks=1 00:20:00.202 00:20:00.202 ' 00:20:00.202 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:00.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.202 --rc genhtml_branch_coverage=1 00:20:00.202 --rc genhtml_function_coverage=1 00:20:00.202 --rc genhtml_legend=1 00:20:00.202 --rc geninfo_all_blocks=1 00:20:00.202 --rc geninfo_unexecuted_blocks=1 00:20:00.202 00:20:00.202 ' 00:20:00.202 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:00.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.202 --rc genhtml_branch_coverage=1 00:20:00.202 --rc genhtml_function_coverage=1 00:20:00.202 --rc genhtml_legend=1 00:20:00.202 --rc geninfo_all_blocks=1 00:20:00.202 --rc geninfo_unexecuted_blocks=1 00:20:00.202 00:20:00.202 ' 00:20:00.202 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:00.202 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:00.202 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:00.202 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:00.202 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:00.202 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:00.202 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:00.202 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:00.202 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:00.202 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:00.202 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:00.202 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:00.202 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:00.202 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:00.202 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:00.203 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:00.203 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:00.203 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:00.203 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:00.203 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:00.203 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:00.203 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:00.203 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:00.203 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.203 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.203 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.203 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:00.203 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.203 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:00.203 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:00.203 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:00.203 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:00.203 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:00.204 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:00.204 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:00.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:00.204 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:00.204 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:00.204 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:00.204 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:00.204 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:00.204 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.204 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:00.204 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:00.204 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:00.204 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.204 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:00.204 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.204 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:00.204 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:00.204 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:00.204 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:02.740 18:27:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:02.740 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.740 18:27:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:02.740 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:02.740 Found net devices under 0000:09:00.0: cvl_0_0 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.740 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:02.740 Found net devices under 0000:09:00.1: cvl_0_1 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:02.741 18:27:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:02.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:02.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:20:02.741 00:20:02.741 --- 10.0.0.2 ping statistics --- 00:20:02.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.741 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:02.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:02.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:20:02.741 00:20:02.741 --- 10.0.0.1 ping statistics --- 00:20:02.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.741 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=1282241 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 1282241 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 1282241 ']' 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:02.741 [2024-10-08 18:27:20.648976] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:20:02.741 [2024-10-08 18:27:20.649063] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:02.741 [2024-10-08 18:27:20.713267] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.741 [2024-10-08 18:27:20.819981] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:02.741 [2024-10-08 18:27:20.820030] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:02.741 [2024-10-08 18:27:20.820059] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:02.741 [2024-10-08 18:27:20.820069] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:02.741 [2024-10-08 18:27:20.820079] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
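For context on the 10.0.0.x addresses pinged above: the phy jobs wire the two e810 ports (cvl_0_0, cvl_0_1) into a back-to-back topology by moving the target port into a private network namespace and leaving the initiator port in the root namespace. A condensed sketch of what nvmf_tcp_init did above, using the same device, namespace, and address names (address flushes and the iptables comment are omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP connections
  # the target application is then launched inside the namespace:
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF

The two one-packet pings seen earlier (root namespace to 10.0.0.2, namespace to 10.0.0.1) verify the path before the nvmf target application is started.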
00:20:02.741 [2024-10-08 18:27:20.820687] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:02.741 [2024-10-08 18:27:20.957633] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:02.741 Malloc0 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.741 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:02.741 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.741 18:27:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:02.741 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.741 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:02.741 [2024-10-08 18:27:21.009366] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:02.741 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.741 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1282276 00:20:02.741 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:02.741 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1282277 00:20:02.741 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:02.742 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1282278 00:20:02.742 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1282276 00:20:02.742 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:02.999 [2024-10-08 18:27:21.068307] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:03.000 [2024-10-08 18:27:21.068580] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:03.000 [2024-10-08 18:27:21.068869] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:03.932 Initializing NVMe Controllers 00:20:03.932 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:03.932 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:03.932 Initialization complete. Launching workers. 
00:20:03.932 ======================================================== 00:20:03.932 Latency(us) 00:20:03.932 Device Information : IOPS MiB/s Average min max 00:20:03.932 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40888.39 40645.97 40959.59 00:20:03.932 ======================================================== 00:20:03.932 Total : 25.00 0.10 40888.39 40645.97 40959.59 00:20:03.932 00:20:03.932 Initializing NVMe Controllers 00:20:03.932 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:03.932 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:03.932 Initialization complete. Launching workers. 00:20:03.932 ======================================================== 00:20:03.932 Latency(us) 00:20:03.932 Device Information : IOPS MiB/s Average min max 00:20:03.932 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40881.51 40475.83 40973.08 00:20:03.932 ======================================================== 00:20:03.932 Total : 25.00 0.10 40881.51 40475.83 40973.08 00:20:03.932 00:20:03.932 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1282277 00:20:04.190 Initializing NVMe Controllers 00:20:04.190 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:04.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:04.190 Initialization complete. Launching workers. 00:20:04.190 ======================================================== 00:20:04.190 Latency(us) 00:20:04.190 Device Information : IOPS MiB/s Average min max 00:20:04.190 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 5359.99 20.94 186.20 161.36 381.25 00:20:04.190 ======================================================== 00:20:04.190 Total : 5359.99 20.94 186.20 161.36 381.25 00:20:04.190 00:20:04.190 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1282278 00:20:04.190 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:04.190 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:04.190 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:04.190 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:04.190 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:04.191 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:04.191 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:04.191 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:04.191 rmmod nvme_tcp 00:20:04.191 rmmod nvme_fabrics 00:20:04.191 rmmod nvme_keyring 00:20:04.191 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:04.191 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:04.191 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:04.191 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@515 -- # '[' -n 1282241 ']' 00:20:04.191 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 1282241 00:20:04.191 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 1282241 ']' 00:20:04.191 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 1282241 00:20:04.191 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:20:04.191 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:04.191 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1282241 00:20:04.191 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:04.191 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:04.191 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1282241' 00:20:04.191 killing process with pid 1282241 00:20:04.191 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 1282241 00:20:04.191 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 1282241 00:20:04.450 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:04.450 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:04.450 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:04.450 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:04.450 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:20:04.450 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:04.450 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:20:04.450 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:04.450 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:04.450 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.450 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:04.450 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.986 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:06.986 00:20:06.986 real 0m6.670s 00:20:06.986 user 0m5.843s 00:20:06.986 sys 0m2.694s 00:20:06.986 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:06.986 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:06.986 ************************************ 00:20:06.986 END TEST nvmf_control_msg_list 00:20:06.986 
************************************ 00:20:06.986 18:27:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:06.986 18:27:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:06.986 18:27:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:06.986 18:27:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:06.986 ************************************ 00:20:06.986 START TEST nvmf_wait_for_buf 00:20:06.986 ************************************ 00:20:06.986 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:06.986 * Looking for test storage... 00:20:06.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:06.986 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:06.986 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:20:06.986 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:06.986 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:06.986 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:06.986 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:06.986 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:06.986 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:06.986 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:06.986 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:06.986 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:06.986 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:06.986 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:06.986 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:06.986 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:06.986 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:06.986 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:20:06.986 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:06.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.987 --rc genhtml_branch_coverage=1 00:20:06.987 --rc genhtml_function_coverage=1 00:20:06.987 --rc genhtml_legend=1 00:20:06.987 --rc geninfo_all_blocks=1 00:20:06.987 --rc geninfo_unexecuted_blocks=1 00:20:06.987 00:20:06.987 ' 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:06.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.987 --rc genhtml_branch_coverage=1 00:20:06.987 --rc genhtml_function_coverage=1 00:20:06.987 --rc genhtml_legend=1 00:20:06.987 --rc geninfo_all_blocks=1 00:20:06.987 --rc geninfo_unexecuted_blocks=1 00:20:06.987 00:20:06.987 ' 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:06.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.987 --rc genhtml_branch_coverage=1 00:20:06.987 --rc genhtml_function_coverage=1 00:20:06.987 --rc genhtml_legend=1 00:20:06.987 --rc geninfo_all_blocks=1 00:20:06.987 --rc geninfo_unexecuted_blocks=1 00:20:06.987 00:20:06.987 ' 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:06.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.987 --rc genhtml_branch_coverage=1 00:20:06.987 --rc genhtml_function_coverage=1 00:20:06.987 --rc genhtml_legend=1 00:20:06.987 --rc geninfo_all_blocks=1 00:20:06.987 --rc geninfo_unexecuted_blocks=1 00:20:06.987 00:20:06.987 ' 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:06.987 18:27:24 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:06.987 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # 
'[' -z tcp ']' 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:06.987 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:08.897 
18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:08.897 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:08.897 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:08.897 Found net devices under 0000:09:00.0: cvl_0_0 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:08.897 Found net devices under 0000:09:00.1: cvl_0_1 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:08.897 18:27:26 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:08.897 18:27:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:08.897 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:08.897 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:08.897 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:08.897 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:08.897 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:08.897 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:08.897 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:08.897 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:08.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:08.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:20:08.897 00:20:08.898 --- 10.0.0.2 ping statistics --- 00:20:08.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.898 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:20:08.898 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:08.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:08.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:20:08.898 00:20:08.898 --- 10.0.0.1 ping statistics --- 00:20:08.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.898 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:20:08.898 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:08.898 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0 00:20:08.898 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:08.898 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:08.898 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:08.898 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:08.898 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:08.898 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:08.898 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:08.898 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:08.898 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:08.898 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:08.898 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:08.898 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=1284472 00:20:08.898 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:08.898 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 1284472 00:20:08.898 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 1284472 ']' 00:20:08.898 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.898 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:08.898 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.898 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:08.898 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:09.156 [2024-10-08 18:27:27.196153] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:20:09.156 [2024-10-08 18:27:27.196233] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.156 [2024-10-08 18:27:27.258870] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.156 [2024-10-08 18:27:27.364978] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.156 [2024-10-08 18:27:27.365026] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.156 [2024-10-08 18:27:27.365055] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:09.156 [2024-10-08 18:27:27.365066] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:09.156 [2024-10-08 18:27:27.365076] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:09.156 [2024-10-08 18:27:27.365687] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.156 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:09.156 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:20:09.156 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:09.156 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:09.156 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:09.157 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.157 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:09.157 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:09.466 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:09.466 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.466 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:09.466 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.466 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:09.466 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.466 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:09.466 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.466 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:09.466 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.466 18:27:27 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:09.466 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.466 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:09.466 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.466 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:09.466 Malloc0 00:20:09.466 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.466 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:09.466 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.466 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:09.466 [2024-10-08 18:27:27.564980] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.466 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.466 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:09.466 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.466 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:09.466 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.466 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:09.466 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.466 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:09.466 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.466 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:09.466 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.466 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:09.466 [2024-10-08 18:27:27.589198] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.466 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.466 18:27:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:09.466 [2024-10-08 18:27:27.666412] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:10.894 Initializing NVMe Controllers 00:20:10.894 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:10.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:10.894 Initialization complete. Launching workers. 00:20:10.894 ======================================================== 00:20:10.894 Latency(us) 00:20:10.894 Device Information : IOPS MiB/s Average min max 00:20:10.894 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 126.00 15.75 33090.97 23972.57 63843.62 00:20:10.894 ======================================================== 00:20:10.894 Total : 126.00 15.75 33090.97 23972.57 63843.62 00:20:10.894 00:20:11.152 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:11.152 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.152 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:11.152 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:11.152 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.152 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1990 00:20:11.152 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1990 -eq 0 ]] 00:20:11.152 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:11.152 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:11.152 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:11.152 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:11.152 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:11.152 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:11.152 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:11.152 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:11.152 rmmod nvme_tcp 00:20:11.152 rmmod nvme_fabrics 00:20:11.152 rmmod nvme_keyring 00:20:11.152 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:11.152 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:11.152 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:11.152 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 1284472 ']' 00:20:11.152 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 1284472 00:20:11.152 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 1284472 ']' 00:20:11.152 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 1284472 00:20:11.152 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@955 -- # uname 00:20:11.152 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:11.152 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1284472 00:20:11.152 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:11.152 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:11.152 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1284472' 00:20:11.153 killing process with pid 1284472 00:20:11.153 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 1284472 00:20:11.153 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 1284472 00:20:11.412 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:11.412 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:11.412 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:11.412 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:11.412 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:20:11.412 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:11.412 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:20:11.412 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:11.412 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:11.412 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.413 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:11.413 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.318 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:13.318 00:20:13.318 real 0m6.852s 00:20:13.318 user 0m3.255s 00:20:13.318 sys 0m2.051s 00:20:13.318 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:13.318 18:27:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:13.318 ************************************ 00:20:13.318 END TEST nvmf_wait_for_buf 00:20:13.318 ************************************ 00:20:13.576 18:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:13.576 18:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:13.576 18:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:13.576 18:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:13.576 18:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:20:13.576 18:27:31 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:16.108 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.108 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:16.109 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:16.109 Found net devices under 0000:09:00.0: cvl_0_0 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:16.109 Found net devices under 0000:09:00.1: cvl_0_1 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:16.109 ************************************ 00:20:16.109 START TEST nvmf_perf_adq 00:20:16.109 ************************************ 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:16.109 * Looking for test storage... 00:20:16.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lcov --version 00:20:16.109 18:27:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:16.109 18:27:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:16.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.109 --rc genhtml_branch_coverage=1 00:20:16.109 --rc genhtml_function_coverage=1 00:20:16.109 --rc genhtml_legend=1 00:20:16.109 --rc geninfo_all_blocks=1 00:20:16.109 --rc geninfo_unexecuted_blocks=1 00:20:16.109 00:20:16.109 ' 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:16.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.109 --rc genhtml_branch_coverage=1 00:20:16.109 --rc genhtml_function_coverage=1 00:20:16.109 --rc genhtml_legend=1 00:20:16.109 --rc geninfo_all_blocks=1 00:20:16.109 --rc geninfo_unexecuted_blocks=1 00:20:16.109 00:20:16.109 ' 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:16.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.109 --rc genhtml_branch_coverage=1 00:20:16.109 --rc genhtml_function_coverage=1 00:20:16.109 --rc genhtml_legend=1 00:20:16.109 --rc geninfo_all_blocks=1 00:20:16.109 --rc geninfo_unexecuted_blocks=1 00:20:16.109 00:20:16.109 ' 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:16.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.109 --rc genhtml_branch_coverage=1 00:20:16.109 --rc genhtml_function_coverage=1 00:20:16.109 --rc genhtml_legend=1 00:20:16.109 --rc geninfo_all_blocks=1 00:20:16.109 --rc geninfo_unexecuted_blocks=1 00:20:16.109 00:20:16.109 ' 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
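Before the perf_adq trace continues, a quick recap of what the nvmf_wait_for_buf test above actually drove: a short RPC sequence against the nvmf_tgt that was started inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, followed by one perf run and a check of the iobuf retry counter. The lines below are a condensed sketch reconstructed from that trace (same helpers, addresses and parameters as this run; full binary paths abbreviated), not a standalone script:

  # Shrink the shared iobuf small-buffer pool (154 buffers, 8192-byte bufsize) so the
  # TCP transport is pushed into its buffer-wait path, then finish subsystem init.
  rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0
  rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192
  rpc_cmd framework_start_init

  # Minimal TCP target: a 32 MB malloc bdev exported as a namespace of
  # nqn.2024-07.io.spdk:cnode0, listening on 10.0.0.2:4420, with small transport
  # buffer counts (-n 24 -b 24).
  rpc_cmd bdev_malloc_create -b Malloc0 32 512
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
  rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Drive 128 KiB random reads at queue depth 4 for one second from the initiator side.
  spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

  # The interesting signal is the small-pool retry counter for the nvmf_TCP module;
  # a zero here would mean the starved pool was never actually exercised.
  rpc_cmd iobuf_get_stats \
      | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'   # 1990 in this run

Even with only 154 small buffers available, the one-second run completed (126 IOPS at roughly 33 ms average latency per the perf summary above) while the nvmf_TCP module recorded 1990 pool retries, which is the buffer-wait behaviour the test name refers to.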
00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:16.109 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:16.110 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.110 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.110 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.110 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:16.110 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.110 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:20:16.110 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:16.110 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:16.110 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:16.110 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:16.110 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:16.110 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:16.110 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:16.110 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:16.110 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:16.110 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:16.110 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:16.110 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:16.110 18:27:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:18.014 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:18.014 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:18.014 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:18.014 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:18.014 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:18.014 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:18.014 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:18.014 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:18.014 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:18.014 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:18.014 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:18.014 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:18.014 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:18.014 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:18.014 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:18.014 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:18.014 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:18.014 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:18.014 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:18.014 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:18.014 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:18.014 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:18.014 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:18.014 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:18.014 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:18.014 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:18.014 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:18.014 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:18.014 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:18.015 18:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:18.015 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:18.015 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:18.015 Found net devices under 0000:09:00.0: cvl_0_0 00:20:18.015 18:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:18.015 Found net devices under 0000:09:00.1: cvl_0_1 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:18.015 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:18.585 18:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:21.117 18:27:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:26.400 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:26.400 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:26.400 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 
'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:26.401 Found net devices under 0000:09:00.0: cvl_0_0 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:26.401 Found net devices under 0000:09:00.1: cvl_0_1 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:26.401 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:26.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:26.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:20:26.401 00:20:26.401 --- 10.0.0.2 ping statistics --- 00:20:26.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.401 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:26.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:26.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:20:26.401 00:20:26.401 --- 10.0.0.1 ping statistics --- 00:20:26.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.401 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=1289227 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 1289227 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1289227 ']' 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.401 [2024-10-08 18:27:44.089970] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:20:26.401 [2024-10-08 18:27:44.090048] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.401 [2024-10-08 18:27:44.154100] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:26.401 [2024-10-08 18:27:44.260217] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.401 [2024-10-08 18:27:44.260279] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.401 [2024-10-08 18:27:44.260315] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.401 [2024-10-08 18:27:44.260327] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.401 [2024-10-08 18:27:44.260336] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:26.401 [2024-10-08 18:27:44.262016] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.401 [2024-10-08 18:27:44.262136] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.401 [2024-10-08 18:27:44.262185] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:20:26.401 [2024-10-08 18:27:44.262188] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.401 
18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.401 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.402 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:26.402 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.402 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.402 [2024-10-08 18:27:44.489151] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:26.402 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.402 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:26.402 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.402 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.402 Malloc1 00:20:26.402 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.402 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:26.402 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.402 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.402 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.402 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:26.402 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.402 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.402 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.402 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:26.402 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.402 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.402 [2024-10-08 18:27:44.540446] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:26.402 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.402 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1289342 00:20:26.402 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:26.402 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:20:28.303 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:20:28.303 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.303 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.303 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.303 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:20:28.303 "tick_rate": 2700000000, 00:20:28.303 "poll_groups": [ 00:20:28.303 { 00:20:28.303 "name": "nvmf_tgt_poll_group_000", 00:20:28.303 "admin_qpairs": 1, 00:20:28.303 "io_qpairs": 1, 00:20:28.303 "current_admin_qpairs": 1, 00:20:28.303 "current_io_qpairs": 1, 00:20:28.303 "pending_bdev_io": 0, 00:20:28.303 "completed_nvme_io": 19767, 00:20:28.303 "transports": [ 00:20:28.303 { 00:20:28.303 "trtype": "TCP" 00:20:28.303 } 00:20:28.303 ] 00:20:28.303 }, 00:20:28.303 { 00:20:28.303 "name": "nvmf_tgt_poll_group_001", 00:20:28.303 "admin_qpairs": 0, 00:20:28.303 "io_qpairs": 1, 00:20:28.303 "current_admin_qpairs": 0, 00:20:28.303 "current_io_qpairs": 1, 00:20:28.303 "pending_bdev_io": 0, 00:20:28.303 "completed_nvme_io": 19594, 00:20:28.303 "transports": [ 00:20:28.303 { 00:20:28.303 "trtype": "TCP" 00:20:28.303 } 00:20:28.303 ] 00:20:28.303 }, 00:20:28.303 { 00:20:28.303 "name": "nvmf_tgt_poll_group_002", 00:20:28.303 "admin_qpairs": 0, 00:20:28.303 "io_qpairs": 1, 00:20:28.303 "current_admin_qpairs": 0, 00:20:28.303 "current_io_qpairs": 1, 00:20:28.303 "pending_bdev_io": 0, 00:20:28.303 "completed_nvme_io": 19825, 00:20:28.303 "transports": [ 00:20:28.303 { 00:20:28.303 "trtype": "TCP" 00:20:28.303 } 00:20:28.303 ] 00:20:28.303 }, 00:20:28.303 { 00:20:28.303 "name": "nvmf_tgt_poll_group_003", 00:20:28.303 "admin_qpairs": 0, 00:20:28.303 "io_qpairs": 1, 00:20:28.303 "current_admin_qpairs": 0, 00:20:28.303 "current_io_qpairs": 1, 00:20:28.303 "pending_bdev_io": 0, 00:20:28.303 "completed_nvme_io": 19519, 00:20:28.303 "transports": [ 00:20:28.303 { 00:20:28.303 "trtype": "TCP" 00:20:28.303 } 00:20:28.303 ] 00:20:28.303 } 00:20:28.303 ] 00:20:28.303 }' 00:20:28.303 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:28.303 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:20:28.561 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:20:28.561 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:20:28.561 18:27:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1289342 00:20:36.670 Initializing NVMe Controllers 00:20:36.670 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:36.670 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:36.670 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:36.670 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:36.670 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:36.670 
Initialization complete. Launching workers. 00:20:36.670 ======================================================== 00:20:36.670 Latency(us) 00:20:36.670 Device Information : IOPS MiB/s Average min max 00:20:36.670 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10165.00 39.71 6295.89 2513.31 10823.90 00:20:36.670 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10347.70 40.42 6186.43 2586.52 9938.62 00:20:36.670 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10519.10 41.09 6083.83 2770.79 9995.02 00:20:36.670 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10366.20 40.49 6174.14 2704.28 10268.46 00:20:36.670 ======================================================== 00:20:36.670 Total : 41397.99 161.71 6184.16 2513.31 10823.90 00:20:36.670 00:20:36.671 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:36.671 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:36.671 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:36.671 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:36.671 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:36.671 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:36.671 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:36.671 rmmod nvme_tcp 00:20:36.671 rmmod nvme_fabrics 00:20:36.671 rmmod nvme_keyring 00:20:36.671 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:36.671 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:36.671 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:36.671 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 1289227 ']' 00:20:36.671 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 1289227 00:20:36.671 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1289227 ']' 00:20:36.671 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1289227 00:20:36.671 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:20:36.671 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:36.671 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1289227 00:20:36.671 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:36.671 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:36.671 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1289227' 00:20:36.671 killing process with pid 1289227 00:20:36.671 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1289227 00:20:36.671 18:27:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1289227 00:20:36.930 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:36.930 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:36.930 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:36.930 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:36.930 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:20:36.930 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:36.930 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:20:36.930 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:36.930 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:36.930 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.930 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:36.930 18:27:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.835 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:38.835 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:38.835 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:38.835 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:39.772 18:27:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:41.671 18:27:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:46.948 18:28:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:46.948 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:46.948 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:46.948 Found net devices under 0000:09:00.0: cvl_0_0 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:46.948 18:28:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:46.948 Found net devices under 0000:09:00.1: cvl_0_1 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:46.948 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:46.949 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:46.949 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:46.949 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:46.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:46.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:20:46.949 00:20:46.949 --- 10.0.0.2 ping statistics --- 00:20:46.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.949 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:20:46.949 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:46.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:46.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:20:46.949 00:20:46.949 --- 10.0.0.1 ping statistics --- 00:20:46.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.949 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:20:46.949 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:46.949 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:20:46.949 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:46.949 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:46.949 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:46.949 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:46.949 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:46.949 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:46.949 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:46.949 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:46.949 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:46.949 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:46.949 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:46.949 net.core.busy_poll = 1 00:20:46.949 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:20:46.949 net.core.busy_read = 1 00:20:46.949 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:46.949 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:46.949 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:46.949 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:46.949 18:28:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:46.949 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:46.949 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:46.949 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:46.949 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.949 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=1291958 00:20:46.949 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:46.949 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 1291958 00:20:46.949 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1291958 ']' 00:20:46.949 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.949 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:46.949 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.949 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:46.949 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.949 [2024-10-08 18:28:05.072611] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:20:46.949 [2024-10-08 18:28:05.072690] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.949 [2024-10-08 18:28:05.136228] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:47.208 [2024-10-08 18:28:05.244432] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:47.208 [2024-10-08 18:28:05.244480] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:47.208 [2024-10-08 18:28:05.244509] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:47.208 [2024-10-08 18:28:05.244519] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:47.208 [2024-10-08 18:28:05.244528] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:47.208 [2024-10-08 18:28:05.246060] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.208 [2024-10-08 18:28:05.246126] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.208 [2024-10-08 18:28:05.246190] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:20:47.208 [2024-10-08 18:28:05.246194] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.208 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:47.208 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:20:47.208 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:47.208 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:47.208 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:47.208 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.208 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:47.208 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:47.208 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.208 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:47.208 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:47.208 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.208 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:47.208 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:47.208 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.208 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:47.208 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.208 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:47.208 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.208 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:47.208 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.208 18:28:05 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:47.208 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.208 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:47.208 [2024-10-08 18:28:05.474490] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:47.208 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.208 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:47.208 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.208 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:47.467 Malloc1 00:20:47.467 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.467 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:47.467 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.467 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:47.467 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.467 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:47.467 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.467 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:47.467 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.467 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:47.467 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.467 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:47.467 [2024-10-08 18:28:05.526128] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.467 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.467 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1291988 00:20:47.467 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:47.467 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:49.367 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:49.367 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.367 18:28:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:49.367 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.367 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:49.367 "tick_rate": 2700000000, 00:20:49.367 "poll_groups": [ 00:20:49.367 { 00:20:49.367 "name": "nvmf_tgt_poll_group_000", 00:20:49.367 "admin_qpairs": 1, 00:20:49.367 "io_qpairs": 0, 00:20:49.367 "current_admin_qpairs": 1, 00:20:49.367 "current_io_qpairs": 0, 00:20:49.367 "pending_bdev_io": 0, 00:20:49.367 "completed_nvme_io": 0, 00:20:49.367 "transports": [ 00:20:49.367 { 00:20:49.367 "trtype": "TCP" 00:20:49.367 } 00:20:49.367 ] 00:20:49.367 }, 00:20:49.367 { 00:20:49.367 "name": "nvmf_tgt_poll_group_001", 00:20:49.367 "admin_qpairs": 0, 00:20:49.367 "io_qpairs": 4, 00:20:49.367 "current_admin_qpairs": 0, 00:20:49.367 "current_io_qpairs": 4, 00:20:49.367 "pending_bdev_io": 0, 00:20:49.367 "completed_nvme_io": 34384, 00:20:49.367 "transports": [ 00:20:49.367 { 00:20:49.367 "trtype": "TCP" 00:20:49.367 } 00:20:49.367 ] 00:20:49.367 }, 00:20:49.367 { 00:20:49.367 "name": "nvmf_tgt_poll_group_002", 00:20:49.367 "admin_qpairs": 0, 00:20:49.367 "io_qpairs": 0, 00:20:49.367 "current_admin_qpairs": 0, 00:20:49.367 "current_io_qpairs": 0, 00:20:49.367 "pending_bdev_io": 0, 00:20:49.367 "completed_nvme_io": 0, 00:20:49.367 "transports": [ 00:20:49.367 { 00:20:49.367 "trtype": "TCP" 00:20:49.367 } 00:20:49.367 ] 00:20:49.367 }, 00:20:49.367 { 00:20:49.367 "name": "nvmf_tgt_poll_group_003", 00:20:49.367 "admin_qpairs": 0, 00:20:49.367 "io_qpairs": 0, 00:20:49.367 "current_admin_qpairs": 0, 00:20:49.367 "current_io_qpairs": 0, 00:20:49.367 "pending_bdev_io": 0, 00:20:49.367 "completed_nvme_io": 0, 00:20:49.367 "transports": [ 00:20:49.367 { 00:20:49.367 "trtype": "TCP" 00:20:49.367 } 00:20:49.367 ] 00:20:49.367 } 00:20:49.367 ] 00:20:49.367 }' 00:20:49.367 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:49.367 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:49.367 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:20:49.367 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:20:49.367 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1291988 00:20:57.482 Initializing NVMe Controllers 00:20:57.482 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:57.482 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:57.482 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:57.482 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:57.482 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:57.482 Initialization complete. Launching workers. 
00:20:57.483 ======================================================== 00:20:57.483 Latency(us) 00:20:57.483 Device Information : IOPS MiB/s Average min max 00:20:57.483 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4542.80 17.75 14094.50 1533.33 60758.45 00:20:57.483 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4619.60 18.05 13860.77 1882.63 62486.56 00:20:57.483 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4457.60 17.41 14405.23 1884.38 59548.73 00:20:57.483 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4095.50 16.00 15635.37 1888.77 62059.53 00:20:57.483 ======================================================== 00:20:57.483 Total : 17715.50 69.20 14467.96 1533.33 62486.56 00:20:57.483 00:20:57.483 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:57.483 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:57.483 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:57.483 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:57.483 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:57.483 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:57.483 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:57.483 rmmod nvme_tcp 00:20:57.483 rmmod nvme_fabrics 00:20:57.483 rmmod nvme_keyring 00:20:57.483 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:57.483 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:57.483 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:57.483 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 1291958 ']' 00:20:57.483 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 1291958 00:20:57.483 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1291958 ']' 00:20:57.483 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1291958 00:20:57.483 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:20:57.483 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:57.483 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1291958 00:20:57.483 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:57.483 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:57.483 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1291958' 00:20:57.483 killing process with pid 1291958 00:20:57.483 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1291958 00:20:57.483 18:28:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1291958 00:20:58.047 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:58.047 
18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:58.047 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:58.047 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:58.047 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:20:58.047 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:58.047 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:20:58.047 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:58.047 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:58.048 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.048 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:58.048 18:28:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:21:01.373 00:21:01.373 real 0m45.183s 00:21:01.373 user 2m40.353s 00:21:01.373 sys 0m9.397s 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.373 ************************************ 00:21:01.373 END TEST nvmf_perf_adq 00:21:01.373 ************************************ 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:01.373 ************************************ 00:21:01.373 START TEST nvmf_shutdown 00:21:01.373 ************************************ 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:01.373 * Looking for test storage... 
00:21:01.373 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:01.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.373 --rc genhtml_branch_coverage=1 00:21:01.373 --rc genhtml_function_coverage=1 00:21:01.373 --rc genhtml_legend=1 00:21:01.373 --rc geninfo_all_blocks=1 00:21:01.373 --rc geninfo_unexecuted_blocks=1 00:21:01.373 00:21:01.373 ' 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:01.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.373 --rc genhtml_branch_coverage=1 00:21:01.373 --rc genhtml_function_coverage=1 00:21:01.373 --rc genhtml_legend=1 00:21:01.373 --rc geninfo_all_blocks=1 00:21:01.373 --rc geninfo_unexecuted_blocks=1 00:21:01.373 00:21:01.373 ' 00:21:01.373 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:01.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.374 --rc genhtml_branch_coverage=1 00:21:01.374 --rc genhtml_function_coverage=1 00:21:01.374 --rc genhtml_legend=1 00:21:01.374 --rc geninfo_all_blocks=1 00:21:01.374 --rc geninfo_unexecuted_blocks=1 00:21:01.374 00:21:01.374 ' 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:01.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.374 --rc genhtml_branch_coverage=1 00:21:01.374 --rc genhtml_function_coverage=1 00:21:01.374 --rc genhtml_legend=1 00:21:01.374 --rc geninfo_all_blocks=1 00:21:01.374 --rc geninfo_unexecuted_blocks=1 00:21:01.374 00:21:01.374 ' 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:01.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:01.374 18:28:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:01.374 ************************************ 00:21:01.374 START TEST nvmf_shutdown_tc1 00:21:01.374 ************************************ 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:01.374 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:03.295 18:28:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:03.295 18:28:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:03.295 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:03.295 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:03.295 Found net devices under 0000:09:00.0: cvl_0_0 00:21:03.295 18:28:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:03.295 Found net devices under 0000:09:00.1: cvl_0_1 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:03.295 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:03.296 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:03.296 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:03.296 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:03.296 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:03.296 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:03.296 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:03.296 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:03.296 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:21:03.296 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:03.296 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:03.296 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:03.296 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:03.296 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:03.296 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:03.296 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:03.296 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:03.296 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:03.296 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:03.296 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:03.554 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:03.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:03.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:21:03.554 00:21:03.554 --- 10.0.0.2 ping statistics --- 00:21:03.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.554 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:21:03.554 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:03.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:03.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:21:03.554 00:21:03.554 --- 10.0.0.1 ping statistics --- 00:21:03.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.554 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:21:03.554 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:03.554 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:21:03.554 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:03.554 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:03.554 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:03.554 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:03.554 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:03.554 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:03.554 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:03.554 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:03.554 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:03.554 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:03.554 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:03.554 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=1295412 00:21:03.554 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:03.554 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 1295412 00:21:03.554 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1295412 ']' 00:21:03.554 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.554 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:03.554 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:03.554 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:03.554 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:03.554 [2024-10-08 18:28:21.670823] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:21:03.554 [2024-10-08 18:28:21.670920] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:03.554 [2024-10-08 18:28:21.740345] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:03.812 [2024-10-08 18:28:21.853243] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:03.812 [2024-10-08 18:28:21.853311] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:03.812 [2024-10-08 18:28:21.853352] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:03.812 [2024-10-08 18:28:21.853365] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:03.812 [2024-10-08 18:28:21.853376] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:03.812 [2024-10-08 18:28:21.855116] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:03.812 [2024-10-08 18:28:21.855174] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:21:03.812 [2024-10-08 18:28:21.855199] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:21:03.812 [2024-10-08 18:28:21.855204] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.812 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:03.812 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:21:03.812 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:03.812 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:03.812 18:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:03.812 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:03.812 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:03.812 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.812 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:03.812 [2024-10-08 18:28:22.021921] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.812 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.812 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:03.812 18:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:03.812 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:03.812 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:03.812 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:03.812 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.812 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:03.812 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.813 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:03.813 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.813 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:03.813 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.813 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:03.813 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.813 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:03.813 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.813 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:03.813 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.813 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:03.813 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.813 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:03.813 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.813 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:03.813 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:03.813 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:03.813 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:03.813 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.813 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:03.813 Malloc1 
00:21:04.070 [2024-10-08 18:28:22.111971] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:04.070 Malloc2 00:21:04.070 Malloc3 00:21:04.070 Malloc4 00:21:04.070 Malloc5 00:21:04.070 Malloc6 00:21:04.328 Malloc7 00:21:04.329 Malloc8 00:21:04.329 Malloc9 00:21:04.329 Malloc10 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1295475 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1295475 /var/tmp/bdevperf.sock 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1295475 ']' 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:04.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
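Two things happen in the stretch above. First, create_subsystems (shutdown.sh:25-36) loops over num_subsystems={1..10}, appending one batch of RPCs per iteration to rpcs.txt (the repeated cat at shutdown.sh:29) and then feeding the collected file to a single rpc_cmd invocation (shutdown.sh:36); the Malloc1..Malloc10 names and the 10.0.0.2:4420 listener notice are the result. Second, a bdev_svc stub application is started with the JSON produced by gen_nvmf_target_json for subsystems 1-10 and its own RPC socket at /var/tmp/bdevperf.sock, so the test can later SIGKILL it and check that the target survives. The exact per-iteration text lives in test/nvmf/target/shutdown.sh; a plausible reconstruction of one batch, with placeholder malloc size and serial values, is:

    # Illustrative only -- the real snippet is the heredoc at shutdown.sh:29.
    i=1
    {
        echo "bdev_malloc_create -b Malloc$i 128 512"
        echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
        echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
        echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
    } >> rpcs.txt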
00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:04.329 { 00:21:04.329 "params": { 00:21:04.329 "name": "Nvme$subsystem", 00:21:04.329 "trtype": "$TEST_TRANSPORT", 00:21:04.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.329 "adrfam": "ipv4", 00:21:04.329 "trsvcid": "$NVMF_PORT", 00:21:04.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.329 "hdgst": ${hdgst:-false}, 00:21:04.329 "ddgst": ${ddgst:-false} 00:21:04.329 }, 00:21:04.329 "method": "bdev_nvme_attach_controller" 00:21:04.329 } 00:21:04.329 EOF 00:21:04.329 )") 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:04.329 { 00:21:04.329 "params": { 00:21:04.329 "name": "Nvme$subsystem", 00:21:04.329 "trtype": "$TEST_TRANSPORT", 00:21:04.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.329 "adrfam": "ipv4", 00:21:04.329 "trsvcid": "$NVMF_PORT", 00:21:04.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.329 "hdgst": ${hdgst:-false}, 00:21:04.329 "ddgst": ${ddgst:-false} 00:21:04.329 }, 00:21:04.329 "method": "bdev_nvme_attach_controller" 00:21:04.329 } 00:21:04.329 EOF 00:21:04.329 )") 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:04.329 { 00:21:04.329 "params": { 00:21:04.329 "name": "Nvme$subsystem", 00:21:04.329 "trtype": "$TEST_TRANSPORT", 00:21:04.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.329 "adrfam": "ipv4", 00:21:04.329 "trsvcid": "$NVMF_PORT", 00:21:04.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.329 "hdgst": ${hdgst:-false}, 00:21:04.329 "ddgst": ${ddgst:-false} 00:21:04.329 }, 00:21:04.329 "method": "bdev_nvme_attach_controller" 00:21:04.329 } 00:21:04.329 EOF 00:21:04.329 )") 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:04.329 { 00:21:04.329 "params": { 00:21:04.329 "name": "Nvme$subsystem", 00:21:04.329 
"trtype": "$TEST_TRANSPORT", 00:21:04.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.329 "adrfam": "ipv4", 00:21:04.329 "trsvcid": "$NVMF_PORT", 00:21:04.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.329 "hdgst": ${hdgst:-false}, 00:21:04.329 "ddgst": ${ddgst:-false} 00:21:04.329 }, 00:21:04.329 "method": "bdev_nvme_attach_controller" 00:21:04.329 } 00:21:04.329 EOF 00:21:04.329 )") 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:04.329 { 00:21:04.329 "params": { 00:21:04.329 "name": "Nvme$subsystem", 00:21:04.329 "trtype": "$TEST_TRANSPORT", 00:21:04.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.329 "adrfam": "ipv4", 00:21:04.329 "trsvcid": "$NVMF_PORT", 00:21:04.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.329 "hdgst": ${hdgst:-false}, 00:21:04.329 "ddgst": ${ddgst:-false} 00:21:04.329 }, 00:21:04.329 "method": "bdev_nvme_attach_controller" 00:21:04.329 } 00:21:04.329 EOF 00:21:04.329 )") 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:04.329 { 00:21:04.329 "params": { 00:21:04.329 "name": "Nvme$subsystem", 00:21:04.329 "trtype": "$TEST_TRANSPORT", 00:21:04.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.329 "adrfam": "ipv4", 00:21:04.329 "trsvcid": "$NVMF_PORT", 00:21:04.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.329 "hdgst": ${hdgst:-false}, 00:21:04.329 "ddgst": ${ddgst:-false} 00:21:04.329 }, 00:21:04.329 "method": "bdev_nvme_attach_controller" 00:21:04.329 } 00:21:04.329 EOF 00:21:04.329 )") 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:04.329 { 00:21:04.329 "params": { 00:21:04.329 "name": "Nvme$subsystem", 00:21:04.329 "trtype": "$TEST_TRANSPORT", 00:21:04.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.329 "adrfam": "ipv4", 00:21:04.329 "trsvcid": "$NVMF_PORT", 00:21:04.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.329 "hdgst": ${hdgst:-false}, 00:21:04.329 "ddgst": ${ddgst:-false} 00:21:04.329 }, 00:21:04.329 "method": "bdev_nvme_attach_controller" 00:21:04.329 } 00:21:04.329 EOF 00:21:04.329 )") 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:04.329 18:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:04.329 { 00:21:04.329 "params": { 00:21:04.329 "name": "Nvme$subsystem", 00:21:04.329 "trtype": "$TEST_TRANSPORT", 00:21:04.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.329 "adrfam": "ipv4", 00:21:04.329 "trsvcid": "$NVMF_PORT", 00:21:04.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.329 "hdgst": ${hdgst:-false}, 00:21:04.329 "ddgst": ${ddgst:-false} 00:21:04.329 }, 00:21:04.329 "method": "bdev_nvme_attach_controller" 00:21:04.329 } 00:21:04.329 EOF 00:21:04.329 )") 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:04.329 { 00:21:04.329 "params": { 00:21:04.329 "name": "Nvme$subsystem", 00:21:04.329 "trtype": "$TEST_TRANSPORT", 00:21:04.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.329 "adrfam": "ipv4", 00:21:04.329 "trsvcid": "$NVMF_PORT", 00:21:04.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.329 "hdgst": ${hdgst:-false}, 00:21:04.329 "ddgst": ${ddgst:-false} 00:21:04.329 }, 00:21:04.329 "method": "bdev_nvme_attach_controller" 00:21:04.329 } 00:21:04.329 EOF 00:21:04.329 )") 00:21:04.329 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:04.330 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:04.330 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:04.330 { 00:21:04.330 "params": { 00:21:04.330 "name": "Nvme$subsystem", 00:21:04.330 "trtype": "$TEST_TRANSPORT", 00:21:04.330 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:04.330 "adrfam": "ipv4", 00:21:04.330 "trsvcid": "$NVMF_PORT", 00:21:04.330 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:04.330 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:04.330 "hdgst": ${hdgst:-false}, 00:21:04.330 "ddgst": ${ddgst:-false} 00:21:04.330 }, 00:21:04.330 "method": "bdev_nvme_attach_controller" 00:21:04.330 } 00:21:04.330 EOF 00:21:04.330 )") 00:21:04.330 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:04.330 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
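gen_nvmf_target_json, traced above, builds one bdev_nvme_attach_controller entry per requested subsystem from the heredoc template at nvmf/common.sh:580, joins the entries with commas, and pretty-prints the result with jq; the expansion that follows is that joined config with $TEST_TRANSPORT/$NVMF_FIRST_TARGET_IP/$NVMF_PORT already substituted as tcp/10.0.0.2/4420. The outer wrapper is never printed in the trace, but the document handed to bdev_svc via --json /dev/fd/63 is an SPDK application JSON config, so its shape is roughly the following (only the Nvme1 entry shown; entries 2-10 differ only in the index):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }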
00:21:04.330 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:21:04.330 18:28:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:21:04.330 "params": { 00:21:04.330 "name": "Nvme1", 00:21:04.330 "trtype": "tcp", 00:21:04.330 "traddr": "10.0.0.2", 00:21:04.330 "adrfam": "ipv4", 00:21:04.330 "trsvcid": "4420", 00:21:04.330 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.330 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:04.330 "hdgst": false, 00:21:04.330 "ddgst": false 00:21:04.330 }, 00:21:04.330 "method": "bdev_nvme_attach_controller" 00:21:04.330 },{ 00:21:04.330 "params": { 00:21:04.330 "name": "Nvme2", 00:21:04.330 "trtype": "tcp", 00:21:04.330 "traddr": "10.0.0.2", 00:21:04.330 "adrfam": "ipv4", 00:21:04.330 "trsvcid": "4420", 00:21:04.330 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:04.330 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:04.330 "hdgst": false, 00:21:04.330 "ddgst": false 00:21:04.330 }, 00:21:04.330 "method": "bdev_nvme_attach_controller" 00:21:04.330 },{ 00:21:04.330 "params": { 00:21:04.330 "name": "Nvme3", 00:21:04.330 "trtype": "tcp", 00:21:04.330 "traddr": "10.0.0.2", 00:21:04.330 "adrfam": "ipv4", 00:21:04.330 "trsvcid": "4420", 00:21:04.330 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:04.330 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:04.330 "hdgst": false, 00:21:04.330 "ddgst": false 00:21:04.330 }, 00:21:04.330 "method": "bdev_nvme_attach_controller" 00:21:04.330 },{ 00:21:04.330 "params": { 00:21:04.330 "name": "Nvme4", 00:21:04.330 "trtype": "tcp", 00:21:04.330 "traddr": "10.0.0.2", 00:21:04.330 "adrfam": "ipv4", 00:21:04.330 "trsvcid": "4420", 00:21:04.330 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:04.330 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:04.330 "hdgst": false, 00:21:04.330 "ddgst": false 00:21:04.330 }, 00:21:04.330 "method": "bdev_nvme_attach_controller" 00:21:04.330 },{ 00:21:04.330 "params": { 00:21:04.330 "name": "Nvme5", 00:21:04.330 "trtype": "tcp", 00:21:04.330 "traddr": "10.0.0.2", 00:21:04.330 "adrfam": "ipv4", 00:21:04.330 "trsvcid": "4420", 00:21:04.330 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:04.330 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:04.330 "hdgst": false, 00:21:04.330 "ddgst": false 00:21:04.330 }, 00:21:04.330 "method": "bdev_nvme_attach_controller" 00:21:04.330 },{ 00:21:04.330 "params": { 00:21:04.330 "name": "Nvme6", 00:21:04.330 "trtype": "tcp", 00:21:04.330 "traddr": "10.0.0.2", 00:21:04.330 "adrfam": "ipv4", 00:21:04.330 "trsvcid": "4420", 00:21:04.330 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:04.330 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:04.330 "hdgst": false, 00:21:04.330 "ddgst": false 00:21:04.330 }, 00:21:04.330 "method": "bdev_nvme_attach_controller" 00:21:04.330 },{ 00:21:04.330 "params": { 00:21:04.330 "name": "Nvme7", 00:21:04.330 "trtype": "tcp", 00:21:04.330 "traddr": "10.0.0.2", 00:21:04.330 "adrfam": "ipv4", 00:21:04.330 "trsvcid": "4420", 00:21:04.330 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:04.330 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:04.330 "hdgst": false, 00:21:04.330 "ddgst": false 00:21:04.330 }, 00:21:04.330 "method": "bdev_nvme_attach_controller" 00:21:04.330 },{ 00:21:04.330 "params": { 00:21:04.330 "name": "Nvme8", 00:21:04.330 "trtype": "tcp", 00:21:04.330 "traddr": "10.0.0.2", 00:21:04.330 "adrfam": "ipv4", 00:21:04.330 "trsvcid": "4420", 00:21:04.330 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:04.330 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:04.330 "hdgst": false, 00:21:04.330 "ddgst": false 00:21:04.330 }, 00:21:04.330 "method": "bdev_nvme_attach_controller" 00:21:04.330 },{ 00:21:04.330 "params": { 00:21:04.330 "name": "Nvme9", 00:21:04.330 "trtype": "tcp", 00:21:04.330 "traddr": "10.0.0.2", 00:21:04.330 "adrfam": "ipv4", 00:21:04.330 "trsvcid": "4420", 00:21:04.330 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:04.330 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:04.330 "hdgst": false, 00:21:04.330 "ddgst": false 00:21:04.330 }, 00:21:04.330 "method": "bdev_nvme_attach_controller" 00:21:04.330 },{ 00:21:04.330 "params": { 00:21:04.330 "name": "Nvme10", 00:21:04.330 "trtype": "tcp", 00:21:04.330 "traddr": "10.0.0.2", 00:21:04.330 "adrfam": "ipv4", 00:21:04.330 "trsvcid": "4420", 00:21:04.330 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:04.330 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:04.330 "hdgst": false, 00:21:04.330 "ddgst": false 00:21:04.330 }, 00:21:04.330 "method": "bdev_nvme_attach_controller" 00:21:04.330 }' 00:21:04.587 [2024-10-08 18:28:22.617691] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:21:04.587 [2024-10-08 18:28:22.617779] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:04.587 [2024-10-08 18:28:22.683072] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.587 [2024-10-08 18:28:22.796705] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.485 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:06.485 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:21:06.485 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:06.485 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.485 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:06.485 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.485 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1295475 00:21:06.485 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:21:06.485 18:28:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:21:07.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1295475 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:07.418 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1295412 00:21:07.418 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:07.418 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 
1 2 3 4 5 6 7 8 9 10 00:21:07.418 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:21:07.418 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:21:07.418 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:07.418 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:07.418 { 00:21:07.418 "params": { 00:21:07.418 "name": "Nvme$subsystem", 00:21:07.418 "trtype": "$TEST_TRANSPORT", 00:21:07.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.418 "adrfam": "ipv4", 00:21:07.418 "trsvcid": "$NVMF_PORT", 00:21:07.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.418 "hdgst": ${hdgst:-false}, 00:21:07.418 "ddgst": ${ddgst:-false} 00:21:07.418 }, 00:21:07.418 "method": "bdev_nvme_attach_controller" 00:21:07.418 } 00:21:07.418 EOF 00:21:07.418 )") 00:21:07.418 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:07.418 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:07.418 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:07.418 { 00:21:07.418 "params": { 00:21:07.418 "name": "Nvme$subsystem", 00:21:07.418 "trtype": "$TEST_TRANSPORT", 00:21:07.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.418 "adrfam": "ipv4", 00:21:07.418 "trsvcid": "$NVMF_PORT", 00:21:07.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.418 "hdgst": ${hdgst:-false}, 00:21:07.418 "ddgst": ${ddgst:-false} 00:21:07.418 }, 00:21:07.418 "method": "bdev_nvme_attach_controller" 00:21:07.418 } 00:21:07.418 EOF 00:21:07.418 )") 00:21:07.418 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:07.418 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:07.418 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:07.418 { 00:21:07.418 "params": { 00:21:07.418 "name": "Nvme$subsystem", 00:21:07.418 "trtype": "$TEST_TRANSPORT", 00:21:07.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.418 "adrfam": "ipv4", 00:21:07.418 "trsvcid": "$NVMF_PORT", 00:21:07.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.418 "hdgst": ${hdgst:-false}, 00:21:07.418 "ddgst": ${ddgst:-false} 00:21:07.418 }, 00:21:07.418 "method": "bdev_nvme_attach_controller" 00:21:07.418 } 00:21:07.418 EOF 00:21:07.418 )") 00:21:07.418 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:07.418 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:07.418 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:07.418 { 00:21:07.418 "params": { 00:21:07.418 "name": "Nvme$subsystem", 00:21:07.418 "trtype": "$TEST_TRANSPORT", 00:21:07.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.418 "adrfam": "ipv4", 00:21:07.418 
"trsvcid": "$NVMF_PORT", 00:21:07.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.418 "hdgst": ${hdgst:-false}, 00:21:07.418 "ddgst": ${ddgst:-false} 00:21:07.418 }, 00:21:07.418 "method": "bdev_nvme_attach_controller" 00:21:07.418 } 00:21:07.418 EOF 00:21:07.418 )") 00:21:07.418 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:07.418 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:07.418 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:07.418 { 00:21:07.418 "params": { 00:21:07.418 "name": "Nvme$subsystem", 00:21:07.418 "trtype": "$TEST_TRANSPORT", 00:21:07.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.418 "adrfam": "ipv4", 00:21:07.418 "trsvcid": "$NVMF_PORT", 00:21:07.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.418 "hdgst": ${hdgst:-false}, 00:21:07.418 "ddgst": ${ddgst:-false} 00:21:07.418 }, 00:21:07.418 "method": "bdev_nvme_attach_controller" 00:21:07.418 } 00:21:07.418 EOF 00:21:07.418 )") 00:21:07.418 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:07.418 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:07.418 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:07.418 { 00:21:07.418 "params": { 00:21:07.418 "name": "Nvme$subsystem", 00:21:07.418 "trtype": "$TEST_TRANSPORT", 00:21:07.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.418 "adrfam": "ipv4", 00:21:07.418 "trsvcid": "$NVMF_PORT", 00:21:07.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.419 "hdgst": ${hdgst:-false}, 00:21:07.419 "ddgst": ${ddgst:-false} 00:21:07.419 }, 00:21:07.419 "method": "bdev_nvme_attach_controller" 00:21:07.419 } 00:21:07.419 EOF 00:21:07.419 )") 00:21:07.419 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:07.419 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:07.419 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:07.419 { 00:21:07.419 "params": { 00:21:07.419 "name": "Nvme$subsystem", 00:21:07.419 "trtype": "$TEST_TRANSPORT", 00:21:07.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.419 "adrfam": "ipv4", 00:21:07.419 "trsvcid": "$NVMF_PORT", 00:21:07.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.419 "hdgst": ${hdgst:-false}, 00:21:07.419 "ddgst": ${ddgst:-false} 00:21:07.419 }, 00:21:07.419 "method": "bdev_nvme_attach_controller" 00:21:07.419 } 00:21:07.419 EOF 00:21:07.419 )") 00:21:07.419 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:07.419 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:07.419 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:07.419 { 00:21:07.419 
"params": { 00:21:07.419 "name": "Nvme$subsystem", 00:21:07.419 "trtype": "$TEST_TRANSPORT", 00:21:07.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.419 "adrfam": "ipv4", 00:21:07.419 "trsvcid": "$NVMF_PORT", 00:21:07.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.419 "hdgst": ${hdgst:-false}, 00:21:07.419 "ddgst": ${ddgst:-false} 00:21:07.419 }, 00:21:07.419 "method": "bdev_nvme_attach_controller" 00:21:07.419 } 00:21:07.419 EOF 00:21:07.419 )") 00:21:07.419 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:07.419 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:07.419 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:07.419 { 00:21:07.419 "params": { 00:21:07.419 "name": "Nvme$subsystem", 00:21:07.419 "trtype": "$TEST_TRANSPORT", 00:21:07.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.419 "adrfam": "ipv4", 00:21:07.419 "trsvcid": "$NVMF_PORT", 00:21:07.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.419 "hdgst": ${hdgst:-false}, 00:21:07.419 "ddgst": ${ddgst:-false} 00:21:07.419 }, 00:21:07.419 "method": "bdev_nvme_attach_controller" 00:21:07.419 } 00:21:07.419 EOF 00:21:07.419 )") 00:21:07.419 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:07.419 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:07.419 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:07.419 { 00:21:07.419 "params": { 00:21:07.419 "name": "Nvme$subsystem", 00:21:07.419 "trtype": "$TEST_TRANSPORT", 00:21:07.419 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.419 "adrfam": "ipv4", 00:21:07.419 "trsvcid": "$NVMF_PORT", 00:21:07.419 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.419 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.419 "hdgst": ${hdgst:-false}, 00:21:07.419 "ddgst": ${ddgst:-false} 00:21:07.419 }, 00:21:07.419 "method": "bdev_nvme_attach_controller" 00:21:07.419 } 00:21:07.419 EOF 00:21:07.419 )") 00:21:07.419 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:07.419 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
00:21:07.419 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:21:07.419 18:28:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:21:07.419 "params": { 00:21:07.419 "name": "Nvme1", 00:21:07.419 "trtype": "tcp", 00:21:07.419 "traddr": "10.0.0.2", 00:21:07.419 "adrfam": "ipv4", 00:21:07.419 "trsvcid": "4420", 00:21:07.419 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.419 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:07.419 "hdgst": false, 00:21:07.419 "ddgst": false 00:21:07.419 }, 00:21:07.419 "method": "bdev_nvme_attach_controller" 00:21:07.419 },{ 00:21:07.419 "params": { 00:21:07.419 "name": "Nvme2", 00:21:07.419 "trtype": "tcp", 00:21:07.419 "traddr": "10.0.0.2", 00:21:07.419 "adrfam": "ipv4", 00:21:07.419 "trsvcid": "4420", 00:21:07.419 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:07.419 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:07.419 "hdgst": false, 00:21:07.419 "ddgst": false 00:21:07.419 }, 00:21:07.419 "method": "bdev_nvme_attach_controller" 00:21:07.419 },{ 00:21:07.419 "params": { 00:21:07.419 "name": "Nvme3", 00:21:07.419 "trtype": "tcp", 00:21:07.419 "traddr": "10.0.0.2", 00:21:07.419 "adrfam": "ipv4", 00:21:07.419 "trsvcid": "4420", 00:21:07.419 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:07.419 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:07.419 "hdgst": false, 00:21:07.419 "ddgst": false 00:21:07.419 }, 00:21:07.419 "method": "bdev_nvme_attach_controller" 00:21:07.419 },{ 00:21:07.419 "params": { 00:21:07.419 "name": "Nvme4", 00:21:07.419 "trtype": "tcp", 00:21:07.419 "traddr": "10.0.0.2", 00:21:07.419 "adrfam": "ipv4", 00:21:07.419 "trsvcid": "4420", 00:21:07.419 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:07.419 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:07.419 "hdgst": false, 00:21:07.419 "ddgst": false 00:21:07.419 }, 00:21:07.419 "method": "bdev_nvme_attach_controller" 00:21:07.419 },{ 00:21:07.419 "params": { 00:21:07.419 "name": "Nvme5", 00:21:07.419 "trtype": "tcp", 00:21:07.419 "traddr": "10.0.0.2", 00:21:07.419 "adrfam": "ipv4", 00:21:07.419 "trsvcid": "4420", 00:21:07.419 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:07.419 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:07.419 "hdgst": false, 00:21:07.419 "ddgst": false 00:21:07.419 }, 00:21:07.419 "method": "bdev_nvme_attach_controller" 00:21:07.419 },{ 00:21:07.419 "params": { 00:21:07.419 "name": "Nvme6", 00:21:07.419 "trtype": "tcp", 00:21:07.419 "traddr": "10.0.0.2", 00:21:07.419 "adrfam": "ipv4", 00:21:07.419 "trsvcid": "4420", 00:21:07.419 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:07.419 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:07.419 "hdgst": false, 00:21:07.419 "ddgst": false 00:21:07.419 }, 00:21:07.419 "method": "bdev_nvme_attach_controller" 00:21:07.419 },{ 00:21:07.419 "params": { 00:21:07.419 "name": "Nvme7", 00:21:07.419 "trtype": "tcp", 00:21:07.419 "traddr": "10.0.0.2", 00:21:07.419 "adrfam": "ipv4", 00:21:07.419 "trsvcid": "4420", 00:21:07.419 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:07.419 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:07.419 "hdgst": false, 00:21:07.419 "ddgst": false 00:21:07.419 }, 00:21:07.419 "method": "bdev_nvme_attach_controller" 00:21:07.419 },{ 00:21:07.419 "params": { 00:21:07.419 "name": "Nvme8", 00:21:07.419 "trtype": "tcp", 00:21:07.419 "traddr": "10.0.0.2", 00:21:07.419 "adrfam": "ipv4", 00:21:07.419 "trsvcid": "4420", 00:21:07.419 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:07.419 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:07.419 "hdgst": false, 00:21:07.419 "ddgst": false 00:21:07.419 }, 00:21:07.419 "method": "bdev_nvme_attach_controller" 00:21:07.419 },{ 00:21:07.419 "params": { 00:21:07.419 "name": "Nvme9", 00:21:07.419 "trtype": "tcp", 00:21:07.419 "traddr": "10.0.0.2", 00:21:07.419 "adrfam": "ipv4", 00:21:07.419 "trsvcid": "4420", 00:21:07.419 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:07.419 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:07.419 "hdgst": false, 00:21:07.419 "ddgst": false 00:21:07.419 }, 00:21:07.419 "method": "bdev_nvme_attach_controller" 00:21:07.419 },{ 00:21:07.419 "params": { 00:21:07.419 "name": "Nvme10", 00:21:07.419 "trtype": "tcp", 00:21:07.419 "traddr": "10.0.0.2", 00:21:07.419 "adrfam": "ipv4", 00:21:07.419 "trsvcid": "4420", 00:21:07.419 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:07.419 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:07.419 "hdgst": false, 00:21:07.419 "ddgst": false 00:21:07.419 }, 00:21:07.419 "method": "bdev_nvme_attach_controller" 00:21:07.419 }' 00:21:07.419 [2024-10-08 18:28:25.688255] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:21:07.419 [2024-10-08 18:28:25.688361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1295896 ] 00:21:07.677 [2024-10-08 18:28:25.754572] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.677 [2024-10-08 18:28:25.866398] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.575 Running I/O for 1 seconds... 00:21:10.509 1746.00 IOPS, 109.12 MiB/s 00:21:10.509 Latency(us) 00:21:10.509 [2024-10-08T16:28:28.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.509 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:10.509 Verification LBA range: start 0x0 length 0x400 00:21:10.509 Nvme1n1 : 1.15 223.48 13.97 0.00 0.00 283566.84 36117.62 246997.90 00:21:10.509 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:10.509 Verification LBA range: start 0x0 length 0x400 00:21:10.509 Nvme2n1 : 1.13 229.15 14.32 0.00 0.00 270221.55 4538.97 254765.13 00:21:10.509 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:10.509 Verification LBA range: start 0x0 length 0x400 00:21:10.509 Nvme3n1 : 1.11 233.49 14.59 0.00 0.00 261112.41 6844.87 257872.02 00:21:10.509 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:10.509 Verification LBA range: start 0x0 length 0x400 00:21:10.509 Nvme4n1 : 1.12 229.36 14.33 0.00 0.00 262555.69 17282.09 262532.36 00:21:10.509 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:10.509 Verification LBA range: start 0x0 length 0x400 00:21:10.509 Nvme5n1 : 1.13 226.82 14.18 0.00 0.00 260935.49 18447.17 257872.02 00:21:10.509 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:10.509 Verification LBA range: start 0x0 length 0x400 00:21:10.509 Nvme6n1 : 1.15 226.49 14.16 0.00 0.00 256304.20 6213.78 264085.81 00:21:10.509 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:10.509 Verification LBA range: start 0x0 length 0x400 00:21:10.509 Nvme7n1 : 1.14 229.39 14.34 0.00 0.00 248496.15 4004.98 265639.25 00:21:10.509 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:10.509 Verification 
LBA range: start 0x0 length 0x400 00:21:10.509 Nvme8n1 : 1.15 225.32 14.08 0.00 0.00 249480.80 1626.26 282727.16 00:21:10.509 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:10.509 Verification LBA range: start 0x0 length 0x400 00:21:10.509 Nvme9n1 : 1.16 220.75 13.80 0.00 0.00 250744.41 20291.89 279620.27 00:21:10.509 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:10.509 Verification LBA range: start 0x0 length 0x400 00:21:10.509 Nvme10n1 : 1.20 267.33 16.71 0.00 0.00 204514.00 3058.35 284280.60 00:21:10.509 [2024-10-08T16:28:28.796Z] =================================================================================================================== 00:21:10.509 [2024-10-08T16:28:28.796Z] Total : 2311.59 144.47 0.00 0.00 253591.81 1626.26 284280.60 00:21:10.767 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:21:10.767 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:10.767 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:10.767 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:10.767 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:10.767 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:10.767 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:21:10.767 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:10.767 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:21:10.767 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:10.767 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:10.767 rmmod nvme_tcp 00:21:11.024 rmmod nvme_fabrics 00:21:11.024 rmmod nvme_keyring 00:21:11.024 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:11.024 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:21:11.025 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:21:11.025 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 1295412 ']' 00:21:11.025 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 1295412 00:21:11.025 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 1295412 ']' 00:21:11.025 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 1295412 00:21:11.025 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:21:11.025 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:11.025 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1295412 00:21:11.025 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:11.025 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:11.025 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1295412' 00:21:11.025 killing process with pid 1295412 00:21:11.025 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 1295412 00:21:11.025 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 1295412 00:21:11.592 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:11.592 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:11.592 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:11.592 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:21:11.592 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:21:11.592 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:11.592 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:21:11.592 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:11.592 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:11.592 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.592 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:11.592 18:28:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:13.499 00:21:13.499 real 0m12.328s 00:21:13.499 user 0m36.293s 00:21:13.499 sys 0m3.246s 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:13.499 ************************************ 00:21:13.499 END TEST nvmf_shutdown_tc1 00:21:13.499 ************************************ 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:13.499 ************************************ 00:21:13.499 START TEST nvmf_shutdown_tc2 00:21:13.499 ************************************ 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:21:13.499 18:28:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:13.499 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:13.499 18:28:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:13.499 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:13.499 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:13.500 Found net devices under 0000:09:00.0: cvl_0_0 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:13.500 Found net devices under 0000:09:00.1: cvl_0_1 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:13.500 18:28:31 
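The discovery pass above ends with the two E810 ports (0000:09:00.0 and 0000:09:00.1, both driven by ice) resolved to cvl_0_0 and cvl_0_1 and assigned the target and initiator roles; cvl_0_0 is about to be moved into its own network namespace (cvl_0_0_ns_spdk) so the target side (10.0.0.2) and initiator side (10.0.0.1) reach each other through the NIC ports rather than the host's local stack. A minimal sketch of that sysfs walk, assuming the same /sys/bus/pci layout the trace reads and with illustrative variable names:

    # Illustrative only: resolve each E810 PCI function to its kernel net device
    # and treat the first as the NVMe-oF target port, the second as the initiator.
    pci_devs=(0000:09:00.0 0000:09:00.1)
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        # a bound, driver-attached port exposes exactly one entry under net/
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $dev ]] && net_devs+=("$(basename "$dev")")
        done
    done
    NVMF_TARGET_INTERFACE=${net_devs[0]}      # cvl_0_0 in this run
    NVMF_INITIATOR_INTERFACE=${net_devs[1]}   # cvl_0_1 in this run
    echo "target=$NVMF_TARGET_INTERFACE initiator=$NVMF_INITIATOR_INTERFACE"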
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:13.500 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:13.758 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:13.758 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:13.758 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:13.758 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:13.758 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:13.758 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:13.758 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:13.758 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:13.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:13.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:21:13.758 00:21:13.758 --- 10.0.0.2 ping statistics --- 00:21:13.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.758 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:21:13.758 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:13.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:13.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:21:13.758 00:21:13.758 --- 10.0.0.1 ping statistics --- 00:21:13.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.758 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:21:13.758 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:13.758 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:21:13.758 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:13.758 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:13.758 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:13.758 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:13.758 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:13.758 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:13.758 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:13.758 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:13.758 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:13.758 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:13.758 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:13.758 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1296780 00:21:13.758 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:13.758 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1296780 00:21:13.758 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1296780 ']' 00:21:13.758 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.758 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:13.758 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
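nvmfappstart launches nvmf_tgt inside cvl_0_0_ns_spdk with -m 0x1E (binary 11110, i.e. cores 1 through 4, which matches "Total cores available: 4" and the four reactor notices below) and then blocks until the application exposes its RPC socket. A minimal sketch of that wait, assuming only standard shell tools and the default /var/tmp/spdk.sock path; the harness's real waitforlisten also checks that the pid is still alive:

    # Illustrative wait loop: poll for the UNIX-domain RPC socket before
    # issuing any RPC calls, giving up after ~25 seconds of quarter-second tries.
    rpc_sock=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        [[ -S $rpc_sock ]] && break
        sleep 0.25
    done
    if [[ -S $rpc_sock ]]; then
        echo "nvmf_tgt is listening on $rpc_sock"
    else
        echo "timed out waiting for $rpc_sock" >&2
        exit 1
    fi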
00:21:13.758 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:13.758 18:28:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:13.758 [2024-10-08 18:28:31.982253] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:21:13.758 [2024-10-08 18:28:31.982369] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.016 [2024-10-08 18:28:32.046643] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:14.016 [2024-10-08 18:28:32.149216] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:14.016 [2024-10-08 18:28:32.149273] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:14.016 [2024-10-08 18:28:32.149308] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:14.016 [2024-10-08 18:28:32.149319] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:14.016 [2024-10-08 18:28:32.149328] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:14.016 [2024-10-08 18:28:32.150783] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:14.016 [2024-10-08 18:28:32.150844] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:21:14.016 [2024-10-08 18:28:32.150959] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:21:14.016 [2024-10-08 18:28:32.150964] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.016 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:14.016 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:21:14.016 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:14.016 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:14.017 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:14.017 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.017 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:14.017 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.017 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:14.017 [2024-10-08 18:28:32.300791] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.275 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.275 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:14.275 18:28:32 
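With the TCP transport created (-t tcp -o -u 8192), the create_subsystems block that follows cats one RPC stanza per subsystem (1..10) into rpcs.txt and replays the file through rpc_cmd; the Malloc1..Malloc10 lines and the 10.0.0.2:4420 listener notice below are the visible result. The stanza text itself is not shown in this excerpt, so the following is only a plausible sketch of what one iteration emits, with the malloc size, block size, and serial chosen for illustration:

    # Hypothetical stanza for subsystem $i, appended to rpcs.txt and later
    # replayed via scripts/rpc.py against /var/tmp/spdk.sock.
    i=1
    cat <<EOF >> rpcs.txt
    bdev_malloc_create -b Malloc$i 64 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK00000000000$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    EOF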
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:14.275 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:14.275 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:14.275 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:14.275 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:14.275 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:14.275 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:14.275 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:14.275 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:14.275 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:14.275 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:14.275 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:14.275 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:14.275 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:14.275 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:14.275 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:14.275 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:14.275 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:14.275 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:14.275 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:14.275 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:14.275 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:14.275 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:14.275 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:14.275 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:14.275 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.275 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:14.275 Malloc1 
00:21:14.275 [2024-10-08 18:28:32.376069] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:14.275 Malloc2 00:21:14.275 Malloc3 00:21:14.275 Malloc4 00:21:14.275 Malloc5 00:21:14.533 Malloc6 00:21:14.533 Malloc7 00:21:14.533 Malloc8 00:21:14.533 Malloc9 00:21:14.533 Malloc10 00:21:14.533 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.533 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:14.533 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:14.533 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:14.791 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1296849 00:21:14.791 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1296849 /var/tmp/bdevperf.sock 00:21:14.791 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1296849 ']' 00:21:14.791 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:14.791 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:14.791 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:14.791 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:14.791 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:21:14.791 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:14.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
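The bdevperf launch above feeds the generated target description through a process substitution, which is why the trace shows --json /dev/fd/63: every params/method pair printed below becomes one bdev_nvme_attach_controller call, producing bdevs Nvme1n1..Nvme10n1 that the 64-deep, 64 KiB (-o 65536) verify workload then exercises for 10 seconds. Later, the waitforio loop polls bdev_get_iostat over the same /var/tmp/bdevperf.sock until Nvme1n1 reports at least 100 reads (67 then 131 in this run) before the forced shutdown is allowed to proceed. A restatement of the launch as it would look hand-rolled, assuming the harness's gen_nvmf_target_json helper has been sourced:

    # Illustrative launch: stream the JSON config to bdevperf on an anonymous fd.
    bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    "$bdevperf" -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!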
00:21:14.791 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:21:14.791 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:14.791 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:14.791 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:14.791 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:14.791 { 00:21:14.791 "params": { 00:21:14.791 "name": "Nvme$subsystem", 00:21:14.792 "trtype": "$TEST_TRANSPORT", 00:21:14.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:14.792 "adrfam": "ipv4", 00:21:14.792 "trsvcid": "$NVMF_PORT", 00:21:14.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:14.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:14.792 "hdgst": ${hdgst:-false}, 00:21:14.792 "ddgst": ${ddgst:-false} 00:21:14.792 }, 00:21:14.792 "method": "bdev_nvme_attach_controller" 00:21:14.792 } 00:21:14.792 EOF 00:21:14.792 )") 00:21:14.792 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:21:14.792 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:14.792 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:14.792 { 00:21:14.792 "params": { 00:21:14.792 "name": "Nvme$subsystem", 00:21:14.792 "trtype": "$TEST_TRANSPORT", 00:21:14.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:14.792 "adrfam": "ipv4", 00:21:14.792 "trsvcid": "$NVMF_PORT", 00:21:14.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:14.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:14.792 "hdgst": ${hdgst:-false}, 00:21:14.792 "ddgst": ${ddgst:-false} 00:21:14.792 }, 00:21:14.792 "method": "bdev_nvme_attach_controller" 00:21:14.792 } 00:21:14.792 EOF 00:21:14.792 )") 00:21:14.792 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:21:14.792 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:14.792 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:14.792 { 00:21:14.792 "params": { 00:21:14.792 "name": "Nvme$subsystem", 00:21:14.792 "trtype": "$TEST_TRANSPORT", 00:21:14.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:14.792 "adrfam": "ipv4", 00:21:14.792 "trsvcid": "$NVMF_PORT", 00:21:14.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:14.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:14.792 "hdgst": ${hdgst:-false}, 00:21:14.792 "ddgst": ${ddgst:-false} 00:21:14.792 }, 00:21:14.792 "method": "bdev_nvme_attach_controller" 00:21:14.792 } 00:21:14.792 EOF 00:21:14.792 )") 00:21:14.792 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:21:14.792 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:14.792 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:14.792 { 00:21:14.792 "params": { 00:21:14.792 "name": "Nvme$subsystem", 00:21:14.792 
"trtype": "$TEST_TRANSPORT", 00:21:14.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:14.792 "adrfam": "ipv4", 00:21:14.792 "trsvcid": "$NVMF_PORT", 00:21:14.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:14.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:14.792 "hdgst": ${hdgst:-false}, 00:21:14.792 "ddgst": ${ddgst:-false} 00:21:14.792 }, 00:21:14.792 "method": "bdev_nvme_attach_controller" 00:21:14.792 } 00:21:14.792 EOF 00:21:14.792 )") 00:21:14.792 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:21:14.792 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:14.792 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:14.792 { 00:21:14.792 "params": { 00:21:14.792 "name": "Nvme$subsystem", 00:21:14.792 "trtype": "$TEST_TRANSPORT", 00:21:14.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:14.792 "adrfam": "ipv4", 00:21:14.792 "trsvcid": "$NVMF_PORT", 00:21:14.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:14.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:14.792 "hdgst": ${hdgst:-false}, 00:21:14.792 "ddgst": ${ddgst:-false} 00:21:14.792 }, 00:21:14.792 "method": "bdev_nvme_attach_controller" 00:21:14.792 } 00:21:14.792 EOF 00:21:14.792 )") 00:21:14.792 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:21:14.792 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:14.792 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:14.792 { 00:21:14.792 "params": { 00:21:14.792 "name": "Nvme$subsystem", 00:21:14.792 "trtype": "$TEST_TRANSPORT", 00:21:14.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:14.792 "adrfam": "ipv4", 00:21:14.792 "trsvcid": "$NVMF_PORT", 00:21:14.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:14.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:14.792 "hdgst": ${hdgst:-false}, 00:21:14.792 "ddgst": ${ddgst:-false} 00:21:14.792 }, 00:21:14.792 "method": "bdev_nvme_attach_controller" 00:21:14.792 } 00:21:14.792 EOF 00:21:14.792 )") 00:21:14.792 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:21:14.792 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:14.792 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:14.792 { 00:21:14.792 "params": { 00:21:14.792 "name": "Nvme$subsystem", 00:21:14.792 "trtype": "$TEST_TRANSPORT", 00:21:14.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:14.792 "adrfam": "ipv4", 00:21:14.792 "trsvcid": "$NVMF_PORT", 00:21:14.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:14.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:14.792 "hdgst": ${hdgst:-false}, 00:21:14.792 "ddgst": ${ddgst:-false} 00:21:14.792 }, 00:21:14.792 "method": "bdev_nvme_attach_controller" 00:21:14.792 } 00:21:14.792 EOF 00:21:14.792 )") 00:21:14.792 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:21:14.792 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:14.792 18:28:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:14.792 { 00:21:14.792 "params": { 00:21:14.792 "name": "Nvme$subsystem", 00:21:14.792 "trtype": "$TEST_TRANSPORT", 00:21:14.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:14.792 "adrfam": "ipv4", 00:21:14.792 "trsvcid": "$NVMF_PORT", 00:21:14.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:14.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:14.792 "hdgst": ${hdgst:-false}, 00:21:14.792 "ddgst": ${ddgst:-false} 00:21:14.792 }, 00:21:14.792 "method": "bdev_nvme_attach_controller" 00:21:14.792 } 00:21:14.792 EOF 00:21:14.792 )") 00:21:14.792 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:21:14.792 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:14.792 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:14.792 { 00:21:14.792 "params": { 00:21:14.792 "name": "Nvme$subsystem", 00:21:14.792 "trtype": "$TEST_TRANSPORT", 00:21:14.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:14.792 "adrfam": "ipv4", 00:21:14.792 "trsvcid": "$NVMF_PORT", 00:21:14.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:14.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:14.792 "hdgst": ${hdgst:-false}, 00:21:14.792 "ddgst": ${ddgst:-false} 00:21:14.792 }, 00:21:14.792 "method": "bdev_nvme_attach_controller" 00:21:14.792 } 00:21:14.792 EOF 00:21:14.792 )") 00:21:14.792 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:21:14.792 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:14.792 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:14.792 { 00:21:14.792 "params": { 00:21:14.792 "name": "Nvme$subsystem", 00:21:14.792 "trtype": "$TEST_TRANSPORT", 00:21:14.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:14.792 "adrfam": "ipv4", 00:21:14.792 "trsvcid": "$NVMF_PORT", 00:21:14.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:14.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:14.792 "hdgst": ${hdgst:-false}, 00:21:14.792 "ddgst": ${ddgst:-false} 00:21:14.792 }, 00:21:14.792 "method": "bdev_nvme_attach_controller" 00:21:14.792 } 00:21:14.792 EOF 00:21:14.792 )") 00:21:14.792 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:21:14.792 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 
00:21:14.792 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:21:14.792 18:28:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:21:14.792 "params": { 00:21:14.792 "name": "Nvme1", 00:21:14.792 "trtype": "tcp", 00:21:14.792 "traddr": "10.0.0.2", 00:21:14.792 "adrfam": "ipv4", 00:21:14.792 "trsvcid": "4420", 00:21:14.792 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.792 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:14.792 "hdgst": false, 00:21:14.792 "ddgst": false 00:21:14.792 }, 00:21:14.792 "method": "bdev_nvme_attach_controller" 00:21:14.792 },{ 00:21:14.792 "params": { 00:21:14.792 "name": "Nvme2", 00:21:14.792 "trtype": "tcp", 00:21:14.792 "traddr": "10.0.0.2", 00:21:14.792 "adrfam": "ipv4", 00:21:14.792 "trsvcid": "4420", 00:21:14.792 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:14.792 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:14.792 "hdgst": false, 00:21:14.792 "ddgst": false 00:21:14.792 }, 00:21:14.792 "method": "bdev_nvme_attach_controller" 00:21:14.792 },{ 00:21:14.792 "params": { 00:21:14.792 "name": "Nvme3", 00:21:14.792 "trtype": "tcp", 00:21:14.792 "traddr": "10.0.0.2", 00:21:14.792 "adrfam": "ipv4", 00:21:14.792 "trsvcid": "4420", 00:21:14.792 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:14.792 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:14.793 "hdgst": false, 00:21:14.793 "ddgst": false 00:21:14.793 }, 00:21:14.793 "method": "bdev_nvme_attach_controller" 00:21:14.793 },{ 00:21:14.793 "params": { 00:21:14.793 "name": "Nvme4", 00:21:14.793 "trtype": "tcp", 00:21:14.793 "traddr": "10.0.0.2", 00:21:14.793 "adrfam": "ipv4", 00:21:14.793 "trsvcid": "4420", 00:21:14.793 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:14.793 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:14.793 "hdgst": false, 00:21:14.793 "ddgst": false 00:21:14.793 }, 00:21:14.793 "method": "bdev_nvme_attach_controller" 00:21:14.793 },{ 00:21:14.793 "params": { 00:21:14.793 "name": "Nvme5", 00:21:14.793 "trtype": "tcp", 00:21:14.793 "traddr": "10.0.0.2", 00:21:14.793 "adrfam": "ipv4", 00:21:14.793 "trsvcid": "4420", 00:21:14.793 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:14.793 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:14.793 "hdgst": false, 00:21:14.793 "ddgst": false 00:21:14.793 }, 00:21:14.793 "method": "bdev_nvme_attach_controller" 00:21:14.793 },{ 00:21:14.793 "params": { 00:21:14.793 "name": "Nvme6", 00:21:14.793 "trtype": "tcp", 00:21:14.793 "traddr": "10.0.0.2", 00:21:14.793 "adrfam": "ipv4", 00:21:14.793 "trsvcid": "4420", 00:21:14.793 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:14.793 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:14.793 "hdgst": false, 00:21:14.793 "ddgst": false 00:21:14.793 }, 00:21:14.793 "method": "bdev_nvme_attach_controller" 00:21:14.793 },{ 00:21:14.793 "params": { 00:21:14.793 "name": "Nvme7", 00:21:14.793 "trtype": "tcp", 00:21:14.793 "traddr": "10.0.0.2", 00:21:14.793 "adrfam": "ipv4", 00:21:14.793 "trsvcid": "4420", 00:21:14.793 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:14.793 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:14.793 "hdgst": false, 00:21:14.793 "ddgst": false 00:21:14.793 }, 00:21:14.793 "method": "bdev_nvme_attach_controller" 00:21:14.793 },{ 00:21:14.793 "params": { 00:21:14.793 "name": "Nvme8", 00:21:14.793 "trtype": "tcp", 00:21:14.793 "traddr": "10.0.0.2", 00:21:14.793 "adrfam": "ipv4", 00:21:14.793 "trsvcid": "4420", 00:21:14.793 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:14.793 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:14.793 "hdgst": false, 00:21:14.793 "ddgst": false 00:21:14.793 }, 00:21:14.793 "method": "bdev_nvme_attach_controller" 00:21:14.793 },{ 00:21:14.793 "params": { 00:21:14.793 "name": "Nvme9", 00:21:14.793 "trtype": "tcp", 00:21:14.793 "traddr": "10.0.0.2", 00:21:14.793 "adrfam": "ipv4", 00:21:14.793 "trsvcid": "4420", 00:21:14.793 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:14.793 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:14.793 "hdgst": false, 00:21:14.793 "ddgst": false 00:21:14.793 }, 00:21:14.793 "method": "bdev_nvme_attach_controller" 00:21:14.793 },{ 00:21:14.793 "params": { 00:21:14.793 "name": "Nvme10", 00:21:14.793 "trtype": "tcp", 00:21:14.793 "traddr": "10.0.0.2", 00:21:14.793 "adrfam": "ipv4", 00:21:14.793 "trsvcid": "4420", 00:21:14.793 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:14.793 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:14.793 "hdgst": false, 00:21:14.793 "ddgst": false 00:21:14.793 }, 00:21:14.793 "method": "bdev_nvme_attach_controller" 00:21:14.793 }' 00:21:14.793 [2024-10-08 18:28:32.873519] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:21:14.793 [2024-10-08 18:28:32.873612] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1296849 ] 00:21:14.793 [2024-10-08 18:28:32.938747] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.793 [2024-10-08 18:28:33.051991] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.691 Running I/O for 10 seconds... 00:21:16.691 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:16.691 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:21:16.691 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:16.691 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.691 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:16.691 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.691 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:16.691 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:16.691 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:16.691 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:21:16.691 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:21:16.691 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:16.691 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:16.691 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:16.691 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:16.691 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.691 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:16.691 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.691 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:16.691 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:16.691 18:28:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:16.949 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:16.949 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:16.949 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:16.949 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:16.949 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.949 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:17.207 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.207 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:17.207 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:17.207 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:21:17.207 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:21:17.207 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:21:17.207 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1296849 00:21:17.207 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1296849 ']' 00:21:17.207 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1296849 00:21:17.207 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:21:17.207 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:17.207 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1296849 00:21:17.207 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:17.207 18:28:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:17.207 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1296849' 00:21:17.207 killing process with pid 1296849 00:21:17.207 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1296849 00:21:17.207 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1296849 00:21:17.207 Received shutdown signal, test time was about 0.873370 seconds 00:21:17.207 00:21:17.207 Latency(us) 00:21:17.207 [2024-10-08T16:28:35.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.207 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:17.207 Verification LBA range: start 0x0 length 0x400 00:21:17.207 Nvme1n1 : 0.84 227.99 14.25 0.00 0.00 275196.59 32234.00 242337.56 00:21:17.207 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:17.207 Verification LBA range: start 0x0 length 0x400 00:21:17.207 Nvme2n1 : 0.85 225.91 14.12 0.00 0.00 273615.33 21068.61 264085.81 00:21:17.207 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:17.207 Verification LBA range: start 0x0 length 0x400 00:21:17.207 Nvme3n1 : 0.84 228.63 14.29 0.00 0.00 263766.98 17476.27 256318.58 00:21:17.207 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:17.207 Verification LBA range: start 0x0 length 0x400 00:21:17.207 Nvme4n1 : 0.83 232.48 14.53 0.00 0.00 253349.23 20291.89 264085.81 00:21:17.207 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:17.207 Verification LBA range: start 0x0 length 0x400 00:21:17.207 Nvme5n1 : 0.86 223.35 13.96 0.00 0.00 258607.79 35535.08 243891.01 00:21:17.207 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:17.207 Verification LBA range: start 0x0 length 0x400 00:21:17.207 Nvme6n1 : 0.83 236.28 14.77 0.00 0.00 235918.63 10145.94 262532.36 00:21:17.207 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:17.207 Verification LBA range: start 0x0 length 0x400 00:21:17.207 Nvme7n1 : 0.85 224.68 14.04 0.00 0.00 244536.26 22233.69 262532.36 00:21:17.207 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:17.207 Verification LBA range: start 0x0 length 0x400 00:21:17.207 Nvme8n1 : 0.86 222.53 13.91 0.00 0.00 241413.18 20583.16 262532.36 00:21:17.207 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:17.207 Verification LBA range: start 0x0 length 0x400 00:21:17.207 Nvme9n1 : 0.87 220.06 13.75 0.00 0.00 238992.24 21068.61 290494.39 00:21:17.207 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:17.207 Verification LBA range: start 0x0 length 0x400 00:21:17.207 Nvme10n1 : 0.87 221.07 13.82 0.00 0.00 231827.28 20971.52 270299.59 00:21:17.207 [2024-10-08T16:28:35.494Z] =================================================================================================================== 00:21:17.207 [2024-10-08T16:28:35.494Z] Total : 2262.99 141.44 0.00 0.00 251681.30 10145.94 290494.39 00:21:17.465 18:28:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:21:18.836 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@115 -- # kill -0 1296780 00:21:18.836 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:21:18.836 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:18.836 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:18.836 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:18.836 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:18.836 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:18.836 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:21:18.836 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:18.836 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:21:18.836 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:18.836 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:18.836 rmmod nvme_tcp 00:21:18.836 rmmod nvme_fabrics 00:21:18.836 rmmod nvme_keyring 00:21:18.836 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:18.836 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:21:18.836 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:21:18.836 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 1296780 ']' 00:21:18.836 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 1296780 00:21:18.836 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1296780 ']' 00:21:18.836 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1296780 00:21:18.836 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:21:18.836 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:18.836 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1296780 00:21:18.836 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:18.836 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:18.836 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1296780' 00:21:18.836 killing process with pid 1296780 00:21:18.836 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@969 -- # kill 1296780 00:21:18.836 18:28:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1296780 00:21:19.094 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:19.094 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:19.095 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:19.095 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:21:19.095 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:21:19.095 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:21:19.095 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:19.095 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:19.095 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:19.095 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.095 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.095 18:28:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:21.630 00:21:21.630 real 0m7.681s 00:21:21.630 user 0m23.106s 00:21:21.630 sys 0m1.475s 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:21.630 ************************************ 00:21:21.630 END TEST nvmf_shutdown_tc2 00:21:21.630 ************************************ 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:21.630 ************************************ 00:21:21.630 START TEST nvmf_shutdown_tc3 00:21:21.630 ************************************ 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:21.630 18:28:39 
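The tc2 teardown just traced unwinds the setup in reverse: flush I/O, unload the initiator-side nvme kernel modules, stop the target process, strip only the SPDK-tagged iptables rules, and remove the test namespace. A minimal sketch of that order, assuming the same names used in this run and that the caller owns the target process:

    # Illustrative teardown mirroring the tc2 nvmftestfini sequence.
    sync
    modprobe -v -r nvme-tcp nvme-fabrics          # drops nvme_tcp/nvme_fabrics/nvme_keyring
    kill "$nvmfpid"; wait "$nvmfpid" 2>/dev/null  # stop nvmf_tgt (pid 1296780 here)
    # keep every iptables rule except the ones tagged SPDK_NVMF at setup time
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk               # hands cvl_0_0 back to the root namespace
    ip -4 addr flush cvl_0_1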
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:21.630 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:21.631 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:21.631 18:28:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:21.631 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:21.631 Found net devices under 0000:09:00.0: cvl_0_0 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.631 18:28:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:21.631 Found net devices under 0000:09:00.1: cvl_0_1 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:21.631 18:28:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:21.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:21.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:21:21.631 00:21:21.631 --- 10.0.0.2 ping statistics --- 00:21:21.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.631 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:21.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:21.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:21:21.631 00:21:21.631 --- 10.0.0.1 ping statistics --- 00:21:21.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.631 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=1297761 00:21:21.631 18:28:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 1297761 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1297761 ']' 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:21.631 18:28:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:21.631 [2024-10-08 18:28:39.691759] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:21:21.631 [2024-10-08 18:28:39.691852] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.632 [2024-10-08 18:28:39.763811] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:21.632 [2024-10-08 18:28:39.875663] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.632 [2024-10-08 18:28:39.875739] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:21.632 [2024-10-08 18:28:39.875753] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.632 [2024-10-08 18:28:39.875764] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.632 [2024-10-08 18:28:39.875773] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
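(The nvmf_tcp_init steps traced above (nvmf/common.sh@250-291) reduce to a short sequence. The lines below are a condensed, illustrative sketch assembled from the commands visible in this run; the interface names cvl_0_0/cvl_0_1, the namespace cvl_0_0_ns_spdk and the 10.0.0.x addresses come straight from the log, while the grouping and quoting are ours rather than the verbatim nvmf/common.sh code.)

#!/usr/bin/env bash
# Sketch of the NVMe/TCP test topology built by nvmf_tcp_init in this run:
# the target-side port (cvl_0_0) is isolated in a network namespace, the
# initiator-side port (cvl_0_1) stays in the root namespace.
TARGET_NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"                          # move the target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator IP (NVMF_FIRST_INITIATOR_IP)
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP (NVMF_FIRST_TARGET_IP)
ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP (port 4420) traffic in
ping -c 1 10.0.0.2                                              # root ns -> target reachability
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1                   # target ns -> initiator reachability

(The nvmf_tgt launched on the @506 line above runs entirely inside that namespace, via the repeated "ip netns exec cvl_0_0_ns_spdk" prefix, so it listens on 10.0.0.2:4420 while bdevperf later connects from the root namespace.)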
00:21:21.632 [2024-10-08 18:28:39.877517] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:21.632 [2024-10-08 18:28:39.877619] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:21:21.632 [2024-10-08 18:28:39.877622] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.632 [2024-10-08 18:28:39.877544] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:21.890 [2024-10-08 18:28:40.037882] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.890 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:21.890 Malloc1 00:21:21.890 [2024-10-08 18:28:40.127678] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:21.890 Malloc2 00:21:22.148 Malloc3 00:21:22.148 Malloc4 00:21:22.148 Malloc5 00:21:22.148 Malloc6 00:21:22.148 Malloc7 00:21:22.406 Malloc8 00:21:22.406 Malloc9 00:21:22.406 Malloc10 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1297935 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1297935 /var/tmp/bdevperf.sock 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1297935 ']' 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:22.407 18:28:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:22.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:22.407 { 00:21:22.407 "params": { 00:21:22.407 "name": "Nvme$subsystem", 00:21:22.407 "trtype": "$TEST_TRANSPORT", 00:21:22.407 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.407 "adrfam": "ipv4", 00:21:22.407 "trsvcid": "$NVMF_PORT", 00:21:22.407 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.407 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.407 "hdgst": ${hdgst:-false}, 00:21:22.407 "ddgst": ${ddgst:-false} 00:21:22.407 }, 00:21:22.407 "method": "bdev_nvme_attach_controller" 00:21:22.407 } 00:21:22.407 EOF 00:21:22.407 )") 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:22.407 { 00:21:22.407 "params": { 00:21:22.407 "name": "Nvme$subsystem", 00:21:22.407 "trtype": "$TEST_TRANSPORT", 00:21:22.407 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.407 "adrfam": "ipv4", 00:21:22.407 "trsvcid": "$NVMF_PORT", 00:21:22.407 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.407 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.407 "hdgst": ${hdgst:-false}, 00:21:22.407 "ddgst": ${ddgst:-false} 00:21:22.407 }, 00:21:22.407 "method": "bdev_nvme_attach_controller" 00:21:22.407 } 00:21:22.407 EOF 00:21:22.407 )") 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:22.407 { 00:21:22.407 "params": { 00:21:22.407 
"name": "Nvme$subsystem", 00:21:22.407 "trtype": "$TEST_TRANSPORT", 00:21:22.407 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.407 "adrfam": "ipv4", 00:21:22.407 "trsvcid": "$NVMF_PORT", 00:21:22.407 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.407 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.407 "hdgst": ${hdgst:-false}, 00:21:22.407 "ddgst": ${ddgst:-false} 00:21:22.407 }, 00:21:22.407 "method": "bdev_nvme_attach_controller" 00:21:22.407 } 00:21:22.407 EOF 00:21:22.407 )") 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:22.407 { 00:21:22.407 "params": { 00:21:22.407 "name": "Nvme$subsystem", 00:21:22.407 "trtype": "$TEST_TRANSPORT", 00:21:22.407 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.407 "adrfam": "ipv4", 00:21:22.407 "trsvcid": "$NVMF_PORT", 00:21:22.407 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.407 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.407 "hdgst": ${hdgst:-false}, 00:21:22.407 "ddgst": ${ddgst:-false} 00:21:22.407 }, 00:21:22.407 "method": "bdev_nvme_attach_controller" 00:21:22.407 } 00:21:22.407 EOF 00:21:22.407 )") 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:22.407 { 00:21:22.407 "params": { 00:21:22.407 "name": "Nvme$subsystem", 00:21:22.407 "trtype": "$TEST_TRANSPORT", 00:21:22.407 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.407 "adrfam": "ipv4", 00:21:22.407 "trsvcid": "$NVMF_PORT", 00:21:22.407 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.407 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.407 "hdgst": ${hdgst:-false}, 00:21:22.407 "ddgst": ${ddgst:-false} 00:21:22.407 }, 00:21:22.407 "method": "bdev_nvme_attach_controller" 00:21:22.407 } 00:21:22.407 EOF 00:21:22.407 )") 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:22.407 { 00:21:22.407 "params": { 00:21:22.407 "name": "Nvme$subsystem", 00:21:22.407 "trtype": "$TEST_TRANSPORT", 00:21:22.407 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.407 "adrfam": "ipv4", 00:21:22.407 "trsvcid": "$NVMF_PORT", 00:21:22.407 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.407 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.407 "hdgst": ${hdgst:-false}, 00:21:22.407 "ddgst": ${ddgst:-false} 00:21:22.407 }, 00:21:22.407 "method": "bdev_nvme_attach_controller" 00:21:22.407 } 00:21:22.407 EOF 00:21:22.407 )") 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in 
"${@:-1}" 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:22.407 { 00:21:22.407 "params": { 00:21:22.407 "name": "Nvme$subsystem", 00:21:22.407 "trtype": "$TEST_TRANSPORT", 00:21:22.407 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.407 "adrfam": "ipv4", 00:21:22.407 "trsvcid": "$NVMF_PORT", 00:21:22.407 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.407 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.407 "hdgst": ${hdgst:-false}, 00:21:22.407 "ddgst": ${ddgst:-false} 00:21:22.407 }, 00:21:22.407 "method": "bdev_nvme_attach_controller" 00:21:22.407 } 00:21:22.407 EOF 00:21:22.407 )") 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:22.407 { 00:21:22.407 "params": { 00:21:22.407 "name": "Nvme$subsystem", 00:21:22.407 "trtype": "$TEST_TRANSPORT", 00:21:22.407 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.407 "adrfam": "ipv4", 00:21:22.407 "trsvcid": "$NVMF_PORT", 00:21:22.407 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.407 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.407 "hdgst": ${hdgst:-false}, 00:21:22.407 "ddgst": ${ddgst:-false} 00:21:22.407 }, 00:21:22.407 "method": "bdev_nvme_attach_controller" 00:21:22.407 } 00:21:22.407 EOF 00:21:22.407 )") 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:22.407 { 00:21:22.407 "params": { 00:21:22.407 "name": "Nvme$subsystem", 00:21:22.407 "trtype": "$TEST_TRANSPORT", 00:21:22.407 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.407 "adrfam": "ipv4", 00:21:22.407 "trsvcid": "$NVMF_PORT", 00:21:22.407 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.407 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.407 "hdgst": ${hdgst:-false}, 00:21:22.407 "ddgst": ${ddgst:-false} 00:21:22.407 }, 00:21:22.407 "method": "bdev_nvme_attach_controller" 00:21:22.407 } 00:21:22.407 EOF 00:21:22.407 )") 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:21:22.407 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:22.408 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:22.408 { 00:21:22.408 "params": { 00:21:22.408 "name": "Nvme$subsystem", 00:21:22.408 "trtype": "$TEST_TRANSPORT", 00:21:22.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.408 "adrfam": "ipv4", 00:21:22.408 "trsvcid": "$NVMF_PORT", 00:21:22.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.408 "hdgst": ${hdgst:-false}, 00:21:22.408 "ddgst": ${ddgst:-false} 00:21:22.408 }, 00:21:22.408 "method": "bdev_nvme_attach_controller" 00:21:22.408 } 00:21:22.408 EOF 00:21:22.408 )") 00:21:22.408 18:28:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:21:22.408 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 00:21:22.408 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:21:22.408 18:28:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:21:22.408 "params": { 00:21:22.408 "name": "Nvme1", 00:21:22.408 "trtype": "tcp", 00:21:22.408 "traddr": "10.0.0.2", 00:21:22.408 "adrfam": "ipv4", 00:21:22.408 "trsvcid": "4420", 00:21:22.408 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.408 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:22.408 "hdgst": false, 00:21:22.408 "ddgst": false 00:21:22.408 }, 00:21:22.408 "method": "bdev_nvme_attach_controller" 00:21:22.408 },{ 00:21:22.408 "params": { 00:21:22.408 "name": "Nvme2", 00:21:22.408 "trtype": "tcp", 00:21:22.408 "traddr": "10.0.0.2", 00:21:22.408 "adrfam": "ipv4", 00:21:22.408 "trsvcid": "4420", 00:21:22.408 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:22.408 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:22.408 "hdgst": false, 00:21:22.408 "ddgst": false 00:21:22.408 }, 00:21:22.408 "method": "bdev_nvme_attach_controller" 00:21:22.408 },{ 00:21:22.408 "params": { 00:21:22.408 "name": "Nvme3", 00:21:22.408 "trtype": "tcp", 00:21:22.408 "traddr": "10.0.0.2", 00:21:22.408 "adrfam": "ipv4", 00:21:22.408 "trsvcid": "4420", 00:21:22.408 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:22.408 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:22.408 "hdgst": false, 00:21:22.408 "ddgst": false 00:21:22.408 }, 00:21:22.408 "method": "bdev_nvme_attach_controller" 00:21:22.408 },{ 00:21:22.408 "params": { 00:21:22.408 "name": "Nvme4", 00:21:22.408 "trtype": "tcp", 00:21:22.408 "traddr": "10.0.0.2", 00:21:22.408 "adrfam": "ipv4", 00:21:22.408 "trsvcid": "4420", 00:21:22.408 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:22.408 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:22.408 "hdgst": false, 00:21:22.408 "ddgst": false 00:21:22.408 }, 00:21:22.408 "method": "bdev_nvme_attach_controller" 00:21:22.408 },{ 00:21:22.408 "params": { 00:21:22.408 "name": "Nvme5", 00:21:22.408 "trtype": "tcp", 00:21:22.408 "traddr": "10.0.0.2", 00:21:22.408 "adrfam": "ipv4", 00:21:22.408 "trsvcid": "4420", 00:21:22.408 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:22.408 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:22.408 "hdgst": false, 00:21:22.408 "ddgst": false 00:21:22.408 }, 00:21:22.408 "method": "bdev_nvme_attach_controller" 00:21:22.408 },{ 00:21:22.408 "params": { 00:21:22.408 "name": "Nvme6", 00:21:22.408 "trtype": "tcp", 00:21:22.408 "traddr": "10.0.0.2", 00:21:22.408 "adrfam": "ipv4", 00:21:22.408 "trsvcid": "4420", 00:21:22.408 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:22.408 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:22.408 "hdgst": false, 00:21:22.408 "ddgst": false 00:21:22.408 }, 00:21:22.408 "method": "bdev_nvme_attach_controller" 00:21:22.408 },{ 00:21:22.408 "params": { 00:21:22.408 "name": "Nvme7", 00:21:22.408 "trtype": "tcp", 00:21:22.408 "traddr": "10.0.0.2", 00:21:22.408 "adrfam": "ipv4", 00:21:22.408 "trsvcid": "4420", 00:21:22.408 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:22.408 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:22.408 "hdgst": false, 00:21:22.408 "ddgst": false 00:21:22.408 }, 00:21:22.408 "method": "bdev_nvme_attach_controller" 00:21:22.408 },{ 00:21:22.408 "params": { 00:21:22.408 "name": "Nvme8", 00:21:22.408 "trtype": "tcp", 
00:21:22.408 "traddr": "10.0.0.2", 00:21:22.408 "adrfam": "ipv4", 00:21:22.408 "trsvcid": "4420", 00:21:22.408 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:22.408 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:22.408 "hdgst": false, 00:21:22.408 "ddgst": false 00:21:22.408 }, 00:21:22.408 "method": "bdev_nvme_attach_controller" 00:21:22.408 },{ 00:21:22.408 "params": { 00:21:22.408 "name": "Nvme9", 00:21:22.408 "trtype": "tcp", 00:21:22.408 "traddr": "10.0.0.2", 00:21:22.408 "adrfam": "ipv4", 00:21:22.408 "trsvcid": "4420", 00:21:22.408 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:22.408 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:22.408 "hdgst": false, 00:21:22.408 "ddgst": false 00:21:22.408 }, 00:21:22.408 "method": "bdev_nvme_attach_controller" 00:21:22.408 },{ 00:21:22.408 "params": { 00:21:22.408 "name": "Nvme10", 00:21:22.408 "trtype": "tcp", 00:21:22.408 "traddr": "10.0.0.2", 00:21:22.408 "adrfam": "ipv4", 00:21:22.408 "trsvcid": "4420", 00:21:22.408 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:22.408 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:22.408 "hdgst": false, 00:21:22.408 "ddgst": false 00:21:22.408 }, 00:21:22.408 "method": "bdev_nvme_attach_controller" 00:21:22.408 }' 00:21:22.408 [2024-10-08 18:28:40.663574] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:21:22.408 [2024-10-08 18:28:40.663688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1297935 ] 00:21:22.666 [2024-10-08 18:28:40.726522] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.666 [2024-10-08 18:28:40.839573] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.564 Running I/O for 10 seconds... 
00:21:24.564 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:24.564 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:21:24.564 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:24.564 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.564 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:24.564 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.564 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:24.564 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:24.564 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:24.564 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:24.564 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:21:24.564 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:21:24.564 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:24.564 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:24.564 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:24.564 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:24.564 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.564 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:24.564 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.564 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:24.564 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:24.564 18:28:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:24.822 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:24.822 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:24.822 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:24.822 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.822 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:24.822 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:24.822 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.822 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=135 00:21:24.822 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 135 -ge 100 ']' 00:21:24.822 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:21:24.822 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:21:24.822 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:21:24.822 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1297761 00:21:24.822 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1297761 ']' 00:21:24.822 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1297761 00:21:24.822 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:21:24.822 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:24.822 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1297761 00:21:25.095 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:25.095 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:25.095 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1297761' 00:21:25.095 killing process with pid 1297761 00:21:25.095 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 1297761 00:21:25.095 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 1297761 00:21:25.095 [2024-10-08 18:28:43.128736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.128853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.128878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.128890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.128902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.128915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.128927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.128939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.128951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.128963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.128985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.128997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 
18:28:43.129187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same 
with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.129660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e260 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.131858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.131919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.131938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.131951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.095 [2024-10-08 18:28:43.131962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.131974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.131986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.131998] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the 
state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.132740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167e730 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.134208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.134243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.134262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.134274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.134295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.134310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.134327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.134340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.134351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.096 [2024-10-08 
18:28:43.134363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.134375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.134388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.134400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.134412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.134424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.134437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.134449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.134462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.134474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.134486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.134511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.134524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.134536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.134549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.134561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.134573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.134586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.134599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.134611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.134623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.134642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same 
with the state(6) to be set 00:21:25.096 [2024-10-08 18:28:43.134655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.134667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.134679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.134692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.134707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.134719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.134731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.134743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.134756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.134768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.134780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.134793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.134806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.134817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.134830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.134842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.134858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.134871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.134884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.134898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.134910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.134923] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.134935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.134948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.134961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.134973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.134985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.134997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.135009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.135021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.135033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.135045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167ec00 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the 
state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.097 [2024-10-08 18:28:43.136843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.136855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.136867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.136878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.136890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.136903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.136915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.136927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.136939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.136951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.136963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.136975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.136987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.136999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 
18:28:43.137010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.137022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.137035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.137047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.137059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.137071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.137083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.137094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.137105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f0f0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.137949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.137975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.137993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same 
with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138413] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the 
state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.138773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167f5c0 is same with the state(6) to be set 00:21:25.098 [2024-10-08 18:28:43.140887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.098 [2024-10-08 18:28:43.140929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.099 [2024-10-08 18:28:43.140953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.099 [2024-10-08 18:28:43.140967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.099 [2024-10-08 18:28:43.140981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.099 [2024-10-08 18:28:43.140996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.099 [2024-10-08 18:28:43.141010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.099 [2024-10-08 18:28:43.141024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.099 [2024-10-08 18:28:43.141038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca18d0 is same with the state(6) to be set 00:21:25.099 [2024-10-08 18:28:43.141093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.099 [2024-10-08 18:28:43.141114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.099 [2024-10-08 18:28:43.141129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.099 [2024-10-08 18:28:43.141143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.099 [2024-10-08 18:28:43.141159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.099 [2024-10-08 18:28:43.141172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.099 [2024-10-08 18:28:43.141195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.099 [2024-10-08 18:28:43.141208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.099 [2024-10-08 18:28:43.141221] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d0410 is same with the state(6) to be set 00:21:25.099 [2024-10-08 18:28:43.141283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.099 [2024-10-08 18:28:43.141322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.099 [2024-10-08 18:28:43.141339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.099 [2024-10-08 18:28:43.141353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.099 [2024-10-08 18:28:43.141367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.099 [2024-10-08 18:28:43.141380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.099 [2024-10-08 18:28:43.141394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.099 [2024-10-08 18:28:43.141407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.099 [2024-10-08 18:28:43.141420] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c5d50 is same with the state(6) to be set 00:21:25.099 [2024-10-08 18:28:43.141469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.099 [2024-10-08 18:28:43.141490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.099 [2024-10-08 18:28:43.141505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.099 [2024-10-08 18:28:43.141518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.099 [2024-10-08 18:28:43.141532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.099 [2024-10-08 18:28:43.141545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.099 [2024-10-08 18:28:43.141559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.099 [2024-10-08 18:28:43.141573] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.099 [2024-10-08 18:28:43.141589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c9230 is same with the state(6) to be set 00:21:25.099 [2024-10-08 18:28:43.141657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.099 [2024-10-08 18:28:43.141679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.099 [2024-10-08 18:28:43.141694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.099 [2024-10-08 18:28:43.141708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.099 [2024-10-08 18:28:43.141729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.099 [2024-10-08 18:28:43.141742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.099 [2024-10-08 18:28:43.141757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.099 [2024-10-08 18:28:43.141770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.099 [2024-10-08 18:28:43.141783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2118a50 is same with the state(6) to be set 00:21:25.099 [2024-10-08 18:28:43.141846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.099 [2024-10-08 18:28:43.141868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.099 [2024-10-08 18:28:43.141883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.099 [2024-10-08 18:28:43.141904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.099 [2024-10-08 18:28:43.141928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.099 [2024-10-08 18:28:43.141944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.099 [2024-10-08 18:28:43.141959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.099 [2024-10-08 18:28:43.141972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.099 [2024-10-08 18:28:43.141986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca57b0 is same with the state(6) to be set 00:21:25.099 [2024-10-08 18:28:43.142035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 
nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.099 [2024-10-08 18:28:43.142056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.099 [2024-10-08 18:28:43.142071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.099 [2024-10-08 18:28:43.142086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.099 [2024-10-08 18:28:43.142100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.099 [2024-10-08 18:28:43.142114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.099 [2024-10-08 18:28:43.142128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.099 [2024-10-08 18:28:43.142151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.099 [2024-10-08 18:28:43.142165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9c2a0 is same with the state(6) to be set 00:21:25.099 [2024-10-08 18:28:43.142467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.099 [2024-10-08 18:28:43.142495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.099 [2024-10-08 18:28:43.142513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.099 [2024-10-08 18:28:43.142532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.099 [2024-10-08 18:28:43.142545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.099 [2024-10-08 18:28:43.142558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.099 [2024-10-08 18:28:43.142570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.099 [2024-10-08 18:28:43.142595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.099 [2024-10-08 18:28:43.142615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.142628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.142649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.142662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.142674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 
00:21:25.100 [2024-10-08 18:28:43.142686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.142699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.142710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.142722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.142734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.142747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.142759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.142772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.142785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.142798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.142811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.142824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.142836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.142849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.142862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.142875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.142888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.142900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.142913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.142926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.142939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.142952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.142973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.142986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.142998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.143010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.143023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.143036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.143048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.143060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.143073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.143085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.143079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.100 [2024-10-08 18:28:43.143102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.143106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.100 [2024-10-08 18:28:43.143114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.143127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.143132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.100 [2024-10-08 18:28:43.143140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.143149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.100 [2024-10-08 18:28:43.143153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.143166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.143167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.100 [2024-10-08 18:28:43.143178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.143182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.100 [2024-10-08 18:28:43.143191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.143199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.100 [2024-10-08 18:28:43.143205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.143215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.100 [2024-10-08 18:28:43.143221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.143231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.100 [2024-10-08 18:28:43.143234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.143248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.100 [2024-10-08 18:28:43.143248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.143265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.143267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.100 [2024-10-08 18:28:43.143282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.143311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.100 [2024-10-08 18:28:43.143321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.143330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.100 [2024-10-08 18:28:43.143334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.143344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.100 [2024-10-08 18:28:43.143346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6) to be set 00:21:25.100 [2024-10-08 18:28:43.143375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140fea0 is same with the state(6)
to be set 00:21:25.100 [2024-10-08 18:28:43.143376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.100 [2024-10-08 18:28:43.143392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.100 [2024-10-08 18:28:43.143407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.100 [2024-10-08 18:28:43.143422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.100 [2024-10-08 18:28:43.143437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.100 [2024-10-08 18:28:43.143450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.100 [2024-10-08 18:28:43.143466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.100 [2024-10-08 18:28:43.143481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.100 [2024-10-08 18:28:43.143496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.100 [2024-10-08 18:28:43.143510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.100 [2024-10-08 18:28:43.143531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.100 [2024-10-08 18:28:43.143547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.100 [2024-10-08 18:28:43.143563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.100 [2024-10-08 18:28:43.143577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.100 [2024-10-08 18:28:43.143601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.100 [2024-10-08 18:28:43.143615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.100 [2024-10-08 18:28:43.143631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.100 [2024-10-08 18:28:43.143645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.100 [2024-10-08 18:28:43.143675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.101 [2024-10-08 18:28:43.143691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:25.101 [2024-10-08 18:28:43.143706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.101 [2024-10-08 18:28:43.143719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.101 [2024-10-08 18:28:43.143734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.101 [2024-10-08 18:28:43.143748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.101 [2024-10-08 18:28:43.143763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.101 [2024-10-08 18:28:43.143777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.101 [2024-10-08 18:28:43.143792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.101 [2024-10-08 18:28:43.143806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.101 [2024-10-08 18:28:43.143821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.101 [2024-10-08 18:28:43.143834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.101 [2024-10-08 18:28:43.143849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.101 [2024-10-08 18:28:43.143862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.101 [2024-10-08 18:28:43.143877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.101 [2024-10-08 18:28:43.143891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.101 [2024-10-08 18:28:43.143906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.101 [2024-10-08 18:28:43.143924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.101 [2024-10-08 18:28:43.143940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.101 [2024-10-08 18:28:43.143954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.101 [2024-10-08 18:28:43.143969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.101 [2024-10-08 18:28:43.143984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.101 
[2024-10-08 18:28:43.144000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.101 [2024-10-08 18:28:43.144014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.101 [2024-10-08 18:28:43.144029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.101 [2024-10-08 18:28:43.144043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.101 [2024-10-08 18:28:43.144059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.101 [2024-10-08 18:28:43.144073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.101 [2024-10-08 18:28:43.144088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.101 [2024-10-08 18:28:43.144103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.101 [2024-10-08 18:28:43.144119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.101 [2024-10-08 18:28:43.144133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.101 [2024-10-08 18:28:43.144148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.101 [2024-10-08 18:28:43.144162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.101 [2024-10-08 18:28:43.144177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.101 [2024-10-08 18:28:43.144207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.101 [2024-10-08 18:28:43.144223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.101 [2024-10-08 18:28:43.144236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.101 [2024-10-08 18:28:43.144252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.101 [2024-10-08 18:28:43.144266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.101 [2024-10-08 18:28:43.144285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.101 [2024-10-08 18:28:43.144309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.101 [2024-10-08 
18:28:43.144329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.101 [2024-10-08 18:28:43.144344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.101 [2024-10-08 18:28:43.144361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.101 [2024-10-08 18:28:43.144375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.101 [2024-10-08 18:28:43.144392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.101 [2024-10-08 18:28:43.144397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with [2024-10-08 18:28:43.144406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:21:25.101 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.101 [2024-10-08 18:28:43.144426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with [2024-10-08 18:28:43.144427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:1the state(6) to be set 00:21:25.101 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.101 [2024-10-08 18:28:43.144442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.101 [2024-10-08 18:28:43.144444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.101 [2024-10-08 18:28:43.144454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.101 [2024-10-08 18:28:43.144460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.101 [2024-10-08 18:28:43.144466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.101 [2024-10-08 18:28:43.144475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.101 [2024-10-08 18:28:43.144479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.101 [2024-10-08 18:28:43.144492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with [2024-10-08 18:28:43.144491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:1the state(6) to be set 00:21:25.101 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.101 [2024-10-08 18:28:43.144506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.101 [2024-10-08 18:28:43.144508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.101 [2024-10-08 18:28:43.144518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with 
the state(6) to be set 00:21:25.101 [2024-10-08 18:28:43.144524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.101 [2024-10-08 18:28:43.144530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.101 [2024-10-08 18:28:43.144538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.101 [2024-10-08 18:28:43.144542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.101 [2024-10-08 18:28:43.144555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with [2024-10-08 18:28:43.144555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:1the state(6) to be set 00:21:25.101 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.101 [2024-10-08 18:28:43.144585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with [2024-10-08 18:28:43.144601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:21:25.101 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.101 [2024-10-08 18:28:43.144614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.101 [2024-10-08 18:28:43.144619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.101 [2024-10-08 18:28:43.144626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.101 [2024-10-08 18:28:43.144640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with [2024-10-08 18:28:43.144641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:21:25.101 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.101 [2024-10-08 18:28:43.144654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.101 [2024-10-08 18:28:43.144657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.101 [2024-10-08 18:28:43.144666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.101 [2024-10-08 18:28:43.144671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.101 [2024-10-08 18:28:43.144678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.101 [2024-10-08 18:28:43.144687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.101 [2024-10-08 18:28:43.144690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.101 [2024-10-08 18:28:43.144701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 c[2024-10-08 18:28:43.144702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.101 the state(6) to be set 00:21:25.101 [2024-10-08 18:28:43.144716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.144718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.102 [2024-10-08 18:28:43.144728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.144733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.102 [2024-10-08 18:28:43.144740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.144748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.102 [2024-10-08 18:28:43.144752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.144762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-10-08 18:28:43.144764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.102 the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.144781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with [2024-10-08 18:28:43.144782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:1the state(6) to be set 00:21:25.102 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.102 [2024-10-08 18:28:43.144796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.144797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.102 [2024-10-08 18:28:43.144808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.144814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.102 [2024-10-08 18:28:43.144820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.144828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.102 [2024-10-08 18:28:43.144832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.144843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:1[2024-10-08 18:28:43.144845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with 28 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:25.102 the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.144858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with [2024-10-08 18:28:43.144858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:21:25.102 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.102 [2024-10-08 18:28:43.144873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.144876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.102 [2024-10-08 18:28:43.144885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.144890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.102 [2024-10-08 18:28:43.144897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.144906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.102 [2024-10-08 18:28:43.144908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.144920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.102 [2024-10-08 18:28:43.144922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.144935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with [2024-10-08 18:28:43.144935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:1the state(6) to be set 00:21:25.102 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.102 [2024-10-08 18:28:43.144948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.144951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.102 [2024-10-08 18:28:43.144961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.144970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:1[2024-10-08 18:28:43.144973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.102 the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.144986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with [2024-10-08 18:28:43.144987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:21:25.102 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.102 [2024-10-08 18:28:43.145000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.145004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.102 [2024-10-08 18:28:43.145013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.145018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.102 [2024-10-08 18:28:43.145025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.145033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.102 [2024-10-08 18:28:43.145037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.145047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-10-08 18:28:43.145049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.102 the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.145062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.145064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.102 [2024-10-08 18:28:43.145074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.145078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.102 [2024-10-08 18:28:43.145085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.145094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.102 [2024-10-08 18:28:43.145097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.145107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.102 [2024-10-08 18:28:43.145109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.145122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with [2024-10-08 18:28:43.145123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:1the state(6) to be set 00:21:25.102 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.102 [2024-10-08 18:28:43.145136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with [2024-10-08 18:28:43.145138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cthe state(6) to be set 00:21:25.102 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.102 [2024-10-08 18:28:43.145153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.145157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.102 [2024-10-08 18:28:43.145165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.145171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.102 [2024-10-08 18:28:43.145177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.145186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.102 [2024-10-08 18:28:43.145189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.145200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-10-08 18:28:43.145202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.102 the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.145214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.145226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.145237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410370 is same with the state(6) to be set 00:21:25.102 [2024-10-08 18:28:43.145236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:25.102 [2024-10-08 18:28:43.145335] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20a5e10 was disconnected and freed. reset controller. 
00:21:25.102 [2024-10-08 18:28:43.146553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.102 [2024-10-08 18:28:43.146589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.102 [2024-10-08 18:28:43.146610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.102 [2024-10-08 18:28:43.146626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.102 [2024-10-08 18:28:43.146643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.102 [2024-10-08 18:28:43.146658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.102 [2024-10-08 18:28:43.146673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.102 [2024-10-08 18:28:43.146697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.102 [2024-10-08 18:28:43.146713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.102 [2024-10-08 18:28:43.146727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.102 [2024-10-08 18:28:43.146743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.146764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 18:28:43.146782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.146797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 18:28:43.146813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.146827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 18:28:43.146850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.146865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 18:28:43.146881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.146895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 
18:28:43.146911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.146926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 18:28:43.146941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.146955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 18:28:43.146971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.146985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 18:28:43.147001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.147015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 18:28:43.147031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.147060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 18:28:43.147076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.147092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 18:28:43.147108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.147122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 18:28:43.147138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.147162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 18:28:43.147181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.147197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 18:28:43.147213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.147238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 
18:28:43.147253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.147268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 18:28:43.147283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.147334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 18:28:43.147352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.147366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 18:28:43.147383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.147397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 18:28:43.147413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.147429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 18:28:43.147445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.147460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 18:28:43.147476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.147491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 18:28:43.147508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.147524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 18:28:43.147540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.147554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 18:28:43.147570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.147584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 
18:28:43.147599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.147644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 18:28:43.147662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.147677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 18:28:43.147693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.147706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 18:28:43.147723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.147737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 18:28:43.147752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.147777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 18:28:43.147793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.147807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 18:28:43.147823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.147837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 18:28:43.147851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.147865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 18:28:43.147880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.147895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 18:28:43.147910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.147925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 
18:28:43.147942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.147956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 18:28:43.147972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.103 [2024-10-08 18:28:43.147985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.103 [2024-10-08 18:28:43.148001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.148014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.148034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.148049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.148066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.148082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.148097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.148111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.148127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.148141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.148156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.148171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.148185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.148199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.148214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.148229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 
18:28:43.148245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.148259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.148274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.148319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.148338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.148357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.148373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.148389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.148407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.148422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.148438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.148458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.148475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.148492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.148509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.148524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.148540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.148555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.148572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.148600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 
18:28:43.148631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.148646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.148661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.148676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.148692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.148706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.148722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.148737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.148767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:25.104 [2024-10-08 18:28:43.148834] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20a9d40 was disconnected and freed. reset controller. 00:21:25.104 [2024-10-08 18:28:43.148882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.148902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.148922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.148938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.148955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.148969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.148991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.149006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.149023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.149038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.149054] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.149068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.149084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.149098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.149114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.149128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.149145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.149165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.149180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.149196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.149212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.149226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.149241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.149255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.149303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.149320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.149337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.149352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.149367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.149382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.149398] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.149417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.149434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.149449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.149466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.149481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.149498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.104 [2024-10-08 18:28:43.149512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.104 [2024-10-08 18:28:43.149529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.149543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.149560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.149574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.149602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.149616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.149632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.149647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.149663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.149678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.149693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.149708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.149724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.149739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.149755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.149769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.149785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.149800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.149820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.149836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.149859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.149874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.149890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.149904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.149920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.149934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.149950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.149965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.149981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.149996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.150012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.150026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.150042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.150057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.150074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.150088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.150115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.150130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.150146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.150161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.150177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.150192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.150208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.150227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.150244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.150260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.150279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.150303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.150321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.150342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.150358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.150373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.150394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.150410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.150427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.150441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.150457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.150472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.150488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.150502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.150518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.150533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.150549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.150563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.150580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.150595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.150611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.150625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.150654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.150669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.150685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.150711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.150727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:25.105 [2024-10-08 18:28:43.150746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.150763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.150778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.150793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.150808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.150823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.150837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.150854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.150876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.150892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.105 [2024-10-08 18:28:43.150906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.105 [2024-10-08 18:28:43.150923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.106 [2024-10-08 18:28:43.150938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.106 [2024-10-08 18:28:43.150953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.106 [2024-10-08 18:28:43.150968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.106 [2024-10-08 18:28:43.150984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.106 [2024-10-08 18:28:43.150998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.106 [2024-10-08 18:28:43.151499] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20ab1e0 was disconnected and freed. reset controller. 
00:21:25.106 [2024-10-08 18:28:43.151594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca18d0 (9): Bad file descriptor 00:21:25.106 [2024-10-08 18:28:43.151633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20d0410 (9): Bad file descriptor 00:21:25.106 [2024-10-08 18:28:43.151664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c5d50 (9): Bad file descriptor 00:21:25.106 [2024-10-08 18:28:43.151701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c9230 (9): Bad file descriptor 00:21:25.106 [2024-10-08 18:28:43.151756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.106 [2024-10-08 18:28:43.151777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.106 [2024-10-08 18:28:43.151793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.106 [2024-10-08 18:28:43.151807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.106 [2024-10-08 18:28:43.151822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.106 [2024-10-08 18:28:43.151835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.106 [2024-10-08 18:28:43.151849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.106 [2024-10-08 18:28:43.151862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.106 [2024-10-08 18:28:43.151875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0e1e0 is same with the state(6) to be set 00:21:25.106 [2024-10-08 18:28:43.151905] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2118a50 (9): Bad file descriptor 00:21:25.106 [2024-10-08 18:28:43.151955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.106 [2024-10-08 18:28:43.151996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.106 [2024-10-08 18:28:43.152016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.106 [2024-10-08 18:28:43.152030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.106 [2024-10-08 18:28:43.152045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.106 [2024-10-08 18:28:43.152058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.106 [2024-10-08 18:28:43.152072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:21:25.106 [2024-10-08 18:28:43.152085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.106 [2024-10-08 18:28:43.152098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c8790 is same with the state(6) to be set 00:21:25.106 [2024-10-08 18:28:43.152130] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca57b0 (9): Bad file descriptor 00:21:25.106 [2024-10-08 18:28:43.152159] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c9c2a0 (9): Bad file descriptor 00:21:25.106 [2024-10-08 18:28:43.152208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.106 [2024-10-08 18:28:43.152229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.106 [2024-10-08 18:28:43.152245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.106 [2024-10-08 18:28:43.152259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.106 [2024-10-08 18:28:43.152297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.106 [2024-10-08 18:28:43.152315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.106 [2024-10-08 18:28:43.152329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:25.106 [2024-10-08 18:28:43.152343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.106 [2024-10-08 18:28:43.152356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e66d0 is same with the state(6) to be set 00:21:25.106 [2024-10-08 18:28:43.156302] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:21:25.106 [2024-10-08 18:28:43.156915] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:21:25.106 [2024-10-08 18:28:43.156946] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:25.106 [2024-10-08 18:28:43.156981] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e66d0 (9): Bad file descriptor 00:21:25.106 [2024-10-08 18:28:43.157120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.106 [2024-10-08 18:28:43.157150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c9230 with addr=10.0.0.2, port=4420 00:21:25.106 [2024-10-08 18:28:43.157168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c9230 is same with the state(6) to be set 00:21:25.106 [2024-10-08 18:28:43.158573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.106 [2024-10-08 18:28:43.158612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2118a50 with 
addr=10.0.0.2, port=4420 00:21:25.106 [2024-10-08 18:28:43.158630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2118a50 is same with the state(6) to be set 00:21:25.106 [2024-10-08 18:28:43.158662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c9230 (9): Bad file descriptor 00:21:25.106 [2024-10-08 18:28:43.158731] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:25.106 [2024-10-08 18:28:43.158805] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:25.106 [2024-10-08 18:28:43.158878] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:25.106 [2024-10-08 18:28:43.158945] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:25.106 [2024-10-08 18:28:43.159085] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:25.106 [2024-10-08 18:28:43.159298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.106 [2024-10-08 18:28:43.159330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e66d0 with addr=10.0.0.2, port=4420 00:21:25.106 [2024-10-08 18:28:43.159348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e66d0 is same with the state(6) to be set 00:21:25.106 [2024-10-08 18:28:43.159368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2118a50 (9): Bad file descriptor 00:21:25.106 [2024-10-08 18:28:43.159387] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:21:25.106 [2024-10-08 18:28:43.159402] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:21:25.106 [2024-10-08 18:28:43.159418] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:21:25.106 [2024-10-08 18:28:43.159579] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:25.106 [2024-10-08 18:28:43.159643] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.106 [2024-10-08 18:28:43.159679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e66d0 (9): Bad file descriptor 00:21:25.106 [2024-10-08 18:28:43.159699] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:25.106 [2024-10-08 18:28:43.159713] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:25.106 [2024-10-08 18:28:43.159726] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
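[editor's note] The posix_sock_create errors above report errno = 111 from connect(). On Linux that value is ECONNREFUSED, which typically means nothing is listening on 10.0.0.2:4420 at that point in the shutdown test, so the reconnect attempts that follow cannot succeed and the controllers move to the failed state. A quick way to confirm the mapping (Linux errno numbering assumed; not part of the original output):

    import errno, os

    # errno 111 as printed by posix_sock_create above (Linux numbering)
    print(errno.errorcode[111])   # ECONNREFUSED
    print(os.strerror(111))       # Connection refused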
00:21:25.106 [2024-10-08 18:28:43.159808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.106 [2024-10-08 18:28:43.159832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.106 [2024-10-08 18:28:43.159859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.106 [2024-10-08 18:28:43.159875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.106 [2024-10-08 18:28:43.159893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.106 [2024-10-08 18:28:43.159908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.106 [2024-10-08 18:28:43.159924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.106 [2024-10-08 18:28:43.159939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.106 [2024-10-08 18:28:43.159955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.106 [2024-10-08 18:28:43.159970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.106 [2024-10-08 18:28:43.159987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.106 [2024-10-08 18:28:43.160001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.106 [2024-10-08 18:28:43.160019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.106 [2024-10-08 18:28:43.160033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.106 [2024-10-08 18:28:43.160051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.106 [2024-10-08 18:28:43.160066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.106 [2024-10-08 18:28:43.160082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.106 [2024-10-08 18:28:43.160097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.106 [2024-10-08 18:28:43.160113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.106 [2024-10-08 18:28:43.160128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 
18:28:43.160145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.160160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.160181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.160197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.160214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.160228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.160245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.160260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.160296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.160314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.160331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.160345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.160362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.160376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.160392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.160407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.160423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.160438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.160454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.160468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.160485] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.160499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.160516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.160531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.160547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.160562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.160578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.160607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.160624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.160639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.160655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.160670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.160687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.160701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.160718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.160732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.160749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.160764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.160780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.160795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.160812] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.160826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.160843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.160857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.160873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.160888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.160905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.160919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.160936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.160950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.160967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.160982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.161002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.161017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.161034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.161049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.161066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.161081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.161097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.161112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.161128] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.161143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.161160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.161175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.161191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.161205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.161222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.161237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.161253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.161267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.161298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.161314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.161330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.161345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.161362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.161376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.161392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.161411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.161429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.161443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.161459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.161474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.107 [2024-10-08 18:28:43.161491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.107 [2024-10-08 18:28:43.161505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.108 [2024-10-08 18:28:43.161522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.108 [2024-10-08 18:28:43.161536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.108 [2024-10-08 18:28:43.161552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.108 [2024-10-08 18:28:43.161567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.108 [2024-10-08 18:28:43.161583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.108 [2024-10-08 18:28:43.161608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.108 [2024-10-08 18:28:43.161625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.108 [2024-10-08 18:28:43.161639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.108 [2024-10-08 18:28:43.161655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.108 [2024-10-08 18:28:43.161669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.108 [2024-10-08 18:28:43.161686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.108 [2024-10-08 18:28:43.161700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.108 [2024-10-08 18:28:43.161717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.108 [2024-10-08 18:28:43.161731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.108 [2024-10-08 18:28:43.161748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.108 [2024-10-08 18:28:43.161762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.108 [2024-10-08 18:28:43.161779] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.108 [2024-10-08 18:28:43.161793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.108 [2024-10-08 18:28:43.161814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.108 [2024-10-08 18:28:43.161829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.108 [2024-10-08 18:28:43.161846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.108 [2024-10-08 18:28:43.161861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.108 [2024-10-08 18:28:43.161877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.108 [2024-10-08 18:28:43.161891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.108 [2024-10-08 18:28:43.161906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2149f00 is same with the state(6) to be set 00:21:25.108 [2024-10-08 18:28:43.161987] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2149f00 was disconnected and freed. reset controller. 00:21:25.108 [2024-10-08 18:28:43.162041] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.108 [2024-10-08 18:28:43.162075] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:21:25.108 [2024-10-08 18:28:43.162093] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:21:25.108 [2024-10-08 18:28:43.162107] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:21:25.108 [2024-10-08 18:28:43.162173] bdev_nvme.c:3069:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:25.108 [2024-10-08 18:28:43.162204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0e1e0 (9): Bad file descriptor 00:21:25.108 [2024-10-08 18:28:43.162240] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c8790 (9): Bad file descriptor 00:21:25.108 [2024-10-08 18:28:43.163481] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:25.108 [2024-10-08 18:28:43.163521] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:21:25.108 [2024-10-08 18:28:43.163600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.108 [2024-10-08 18:28:43.163623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.108 [2024-10-08 18:28:43.163644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.108 [2024-10-08 18:28:43.163660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.108 [2024-10-08 18:28:43.163678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.108 [2024-10-08 18:28:43.163692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.108 [2024-10-08 18:28:43.163709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.108 [2024-10-08 18:28:43.163724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.108 [2024-10-08 18:28:43.163741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.108 [2024-10-08 18:28:43.163755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.108 [2024-10-08 18:28:43.163777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.108 [2024-10-08 18:28:43.163792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.108 [2024-10-08 18:28:43.163809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.108 [2024-10-08 18:28:43.163824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.108 [2024-10-08 18:28:43.163840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.108 [2024-10-08 18:28:43.163855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.108 [2024-10-08 18:28:43.163871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.108 [2024-10-08 18:28:43.163886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.108 [2024-10-08 18:28:43.163902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.108 [2024-10-08 18:28:43.163917] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.108 [2024-10-08 18:28:43.163934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.108 [2024-10-08 18:28:43.163948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.108 [2024-10-08 18:28:43.163964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.108 [2024-10-08 18:28:43.163979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.108 [2024-10-08 18:28:43.163996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.108 [2024-10-08 18:28:43.164011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.108 [2024-10-08 18:28:43.164027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.108 [2024-10-08 18:28:43.164041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.108 [2024-10-08 18:28:43.164058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.108 [2024-10-08 18:28:43.164072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.108 [2024-10-08 18:28:43.164089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.108 [2024-10-08 18:28:43.164103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.108 [2024-10-08 18:28:43.164119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.108 [2024-10-08 18:28:43.164134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.108 [2024-10-08 18:28:43.164150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.108 [2024-10-08 18:28:43.164169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.108 [2024-10-08 18:28:43.164186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.108 [2024-10-08 18:28:43.164201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.108 [2024-10-08 18:28:43.164217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.108 [2024-10-08 18:28:43.164232] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.108 [2024-10-08 18:28:43.164249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.164264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.164297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.164315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.164332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.164347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.164363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.164377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.164393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.164407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.164423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.164438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.164454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.164468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.164484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.164499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.164515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.164530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.164547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.164560] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.164581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.164600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.164616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.164631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.164647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.164662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.164678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.164693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.164709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.164724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.164739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.164754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.164771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.164785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.164801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.164816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.164832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.164847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.164863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.164878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.164894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.164910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.164926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.164941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.164957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.164975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.164993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.165008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.165024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.165039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.165055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.165069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.165086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.165100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.165116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.165130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.165146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.165161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.165177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.165191] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.165207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.165221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.165238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.165252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.165269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.165293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.165312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.165327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.165343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.165358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.165383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.165398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.165414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.165429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.165446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.165460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.165476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.165490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.165507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.165521] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.165537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.165552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.165567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.109 [2024-10-08 18:28:43.165582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.109 [2024-10-08 18:28:43.165598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.165621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.165637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.165651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.165676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea98c0 is same with the state(6) to be set 00:21:25.110 [2024-10-08 18:28:43.166931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.166954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.166976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.166992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.167008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.167023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.167045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.167060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.167077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.167092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.167108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.167123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.167140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.167154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.167171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.167185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.167201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.167217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.167233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.167248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.167263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.167284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.167311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.167326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.167343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.167357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.167374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.167388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.167404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.167419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.167435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.167454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.167471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.167486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.167503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.167517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.167534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.167549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.167566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.167586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.167602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.167617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.167634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.167659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.167675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.167690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.167706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.167721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.167737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.167752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.167768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.167783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.167801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.167816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.167833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.167847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.167868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.167883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.167900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.167915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.167932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.167947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.167963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.167978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.167995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.168010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.168026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.168042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.168058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.168073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.168089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.168104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.168120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.168135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.168152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.168167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.168183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.168198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.110 [2024-10-08 18:28:43.168214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.110 [2024-10-08 18:28:43.168229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.168245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.168264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.168294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.168311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.168328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.168343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.168359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.168374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.168390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.168404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.168421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:25.111 [2024-10-08 18:28:43.168435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.168451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.168466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.168482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.168497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.168513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.168528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.168544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.168558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.168580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.168595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.168612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.168626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.168645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.168659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.168679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.168694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.168711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.168725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.168742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 
18:28:43.168756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.168772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.168786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.168802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.168816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.168832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.168847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.168863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.168878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.168894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.168908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.168924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.168939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.168955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.168970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.168985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.169000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.169016] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaaa90 is same with the state(6) to be set 00:21:25.111 [2024-10-08 18:28:43.170255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.170309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.170334] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.170356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.170373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.170388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.170404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.170419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.170435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.170450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.170466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.170480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.170496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.170510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.170526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.170540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.170556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.170571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.170597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.170611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.170627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.170641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.170657] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.170672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.170689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.170703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.170720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.170734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.170754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.170770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.170786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.170801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.170818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.170832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.111 [2024-10-08 18:28:43.170849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.111 [2024-10-08 18:28:43.170863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.170880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.170894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.170911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.170926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.170942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.170956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.170973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.170987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.171004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.171017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.171034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.171048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.171064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.171079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.171096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.171110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.171127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.171145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.171163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.171178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.171195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.171210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.171226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.171242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.171258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.171284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.171318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.171334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.171351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.171366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.171383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.171397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.171414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.171429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.171445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.171460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.171477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.171491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.171508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.171522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.171539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.171554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.171574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.171595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.171612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.171626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.171642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.171656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.171672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.171687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.171703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.171717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.171733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.171748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.171764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.171779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.171795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.171809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.171825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.171839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.171855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.171870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.171886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.171900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.171917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.171931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.171947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:25.112 [2024-10-08 18:28:43.171965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.171982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.171997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.172013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.172028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.112 [2024-10-08 18:28:43.172044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.112 [2024-10-08 18:28:43.172058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.172074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.172088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.172105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.172119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.172135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.172150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.172166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.172181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.172197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.172211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.172228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.172242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.172259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 
18:28:43.172283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.172308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.172324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.172340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.172356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.172375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2147660 is same with the state(6) to be set 00:21:25.113 [2024-10-08 18:28:43.173637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.173660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.173681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.173697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.173714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.173728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.173744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.173759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.173776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.173791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.173807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.173821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.173838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.173853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.173869] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.173884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.173900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.173915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.173931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.173945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.173963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.173978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.173994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.174009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.174031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.174047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.174063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.174077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.174094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.174109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.174126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.174140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.174156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.174171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.174188] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.174203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.174219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.174233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.174250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.174265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.174298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.174315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.174332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.174347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.174364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.174378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.174394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.174410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.174427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.174446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.174463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.174477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.174494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.174508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.174525] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.174539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.174556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.174570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.174596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.174610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.174627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.174641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.174658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.113 [2024-10-08 18:28:43.174672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.113 [2024-10-08 18:28:43.174689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.174703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.114 [2024-10-08 18:28:43.174719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.174733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.114 [2024-10-08 18:28:43.174749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.174763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.114 [2024-10-08 18:28:43.174779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.174794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.114 [2024-10-08 18:28:43.174810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.174824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.114 [2024-10-08 18:28:43.174843] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.174859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.114 [2024-10-08 18:28:43.174876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.174891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.114 [2024-10-08 18:28:43.174907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.174922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.114 [2024-10-08 18:28:43.174939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.174954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.114 [2024-10-08 18:28:43.174970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.174984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.114 [2024-10-08 18:28:43.175000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.175014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.114 [2024-10-08 18:28:43.175031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.175045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.114 [2024-10-08 18:28:43.175061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.175075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.114 [2024-10-08 18:28:43.175091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.175106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.114 [2024-10-08 18:28:43.175122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.175136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.114 [2024-10-08 18:28:43.175153] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.175167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.114 [2024-10-08 18:28:43.175183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.175197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.114 [2024-10-08 18:28:43.175213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.175234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.114 [2024-10-08 18:28:43.175251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.175266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.114 [2024-10-08 18:28:43.175282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.175304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.114 [2024-10-08 18:28:43.175322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.175337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.114 [2024-10-08 18:28:43.175353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.175368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.114 [2024-10-08 18:28:43.175384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.175399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.114 [2024-10-08 18:28:43.175415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.175431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.114 [2024-10-08 18:28:43.175447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.175462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.114 [2024-10-08 18:28:43.175478] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.175493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.114 [2024-10-08 18:28:43.175509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.175523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.114 [2024-10-08 18:28:43.175540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.175554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.114 [2024-10-08 18:28:43.175570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.175590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.114 [2024-10-08 18:28:43.175606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.175620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.114 [2024-10-08 18:28:43.175641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.175657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.114 [2024-10-08 18:28:43.175674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.175688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.114 [2024-10-08 18:28:43.175703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2148ab0 is same with the state(6) to be set 00:21:25.114 [2024-10-08 18:28:43.176960] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:25.114 [2024-10-08 18:28:43.176990] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:25.114 [2024-10-08 18:28:43.177009] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:25.114 [2024-10-08 18:28:43.177240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.114 [2024-10-08 18:28:43.177276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c5d50 with addr=10.0.0.2, port=4420 00:21:25.114 [2024-10-08 18:28:43.177308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c5d50 is same with the state(6) to be set 00:21:25.114 [2024-10-08 18:28:43.177380] bdev_nvme.c:3069:bdev_nvme_failover_ctrlr_unsafe: 
*NOTICE*: Unable to perform failover, already in progress. 00:21:25.114 [2024-10-08 18:28:43.177428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c5d50 (9): Bad file descriptor 00:21:25.114 [2024-10-08 18:28:43.177796] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:21:25.114 [2024-10-08 18:28:43.177947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.114 [2024-10-08 18:28:43.177976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca57b0 with addr=10.0.0.2, port=4420 00:21:25.114 [2024-10-08 18:28:43.177993] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca57b0 is same with the state(6) to be set 00:21:25.114 [2024-10-08 18:28:43.178079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.114 [2024-10-08 18:28:43.178106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c9c2a0 with addr=10.0.0.2, port=4420 00:21:25.114 [2024-10-08 18:28:43.178123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9c2a0 is same with the state(6) to be set 00:21:25.114 [2024-10-08 18:28:43.178224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.114 [2024-10-08 18:28:43.178251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca18d0 with addr=10.0.0.2, port=4420 00:21:25.114 [2024-10-08 18:28:43.178280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca18d0 is same with the state(6) to be set 00:21:25.114 [2024-10-08 18:28:43.179432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.179458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.114 [2024-10-08 18:28:43.179482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.114 [2024-10-08 18:28:43.179498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.179515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.179538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.179556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.179582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.179598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.179613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.179630] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.179645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.179661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.179676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.179693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.179709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.179725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.179740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.179756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.179770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.179787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.179802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.179819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.179833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.179850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.179864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.179881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.179896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.179912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.179926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.179947] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.179963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.179980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.179995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.180012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.180027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.180043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.180057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.180074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.180089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.180106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.180121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.180137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.180151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.180168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.180182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.180199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.180214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.180230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.180245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.180262] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.180283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.180310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.180326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.180342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.180362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.180380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.180395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.180411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.180425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.180442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.180457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.180474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.180488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.180504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.180519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.180535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.180550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.180565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.180583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.180599] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.180614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.180630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.180645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.180661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.180675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.180693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.180707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.180724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.180739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.180760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.180775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.180792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.180806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.180823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.180837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.115 [2024-10-08 18:28:43.180853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.115 [2024-10-08 18:28:43.180868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.180884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.180898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.180913] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.180927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.180944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.180958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.180974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.180988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.181004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.181019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.181035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.181049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.181065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.181079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.181096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.181111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.181127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.181142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.181162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.181177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.181194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.181209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.181226] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.181240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.181256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.181283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.181308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.181323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.181339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.181354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.181370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.181384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.181401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.181416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.181432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.181446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.181462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.181477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.181493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.181508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.181523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a73b0 is same with the state(6) to be set 00:21:25.116 [2024-10-08 18:28:43.182786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.182809] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.182835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.182853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.182870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.182884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.182900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.182915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.182931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.182945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.182961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.182976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.182993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.183007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.183024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.183038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.183055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.183070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.183086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.183101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.183118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.183132] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.183148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.183163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.183179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.183194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.183210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.183228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.183245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.183260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.183285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.183309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.116 [2024-10-08 18:28:43.183325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.116 [2024-10-08 18:28:43.183340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.183356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.183371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.183388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.183402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.183418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.183433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.183449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.183463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.183480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.183494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.183511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.183525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.183542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.183556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.183573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.183595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.183612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.183626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.183647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.183662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.183679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.183693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.183710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.183725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.183741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.183756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.183773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.183787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.183804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.183819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.183835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.183849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.183866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.183881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.183897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.183911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.183928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.183944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.183961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.183975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.183991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.184006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.184023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.184041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.184059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.184073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.184089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.184104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.184120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.184135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.184150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.184165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.184181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.184195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.184212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.184227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.184243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.184257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.184285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.184307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.184324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.184339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.184355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.184370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.184387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.184401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.184418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.184432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:25.117 [2024-10-08 18:28:43.184453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.184468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.184485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.184499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.184515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.184529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.184546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.184560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.184584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.184599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.184615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.184629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.184646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.117 [2024-10-08 18:28:43.184661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.117 [2024-10-08 18:28:43.184677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.118 [2024-10-08 18:28:43.184691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.118 [2024-10-08 18:28:43.184707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.118 [2024-10-08 18:28:43.184722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.118 [2024-10-08 18:28:43.184738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.118 [2024-10-08 18:28:43.184753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.118 [2024-10-08 
18:28:43.184769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.118 [2024-10-08 18:28:43.184784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.118 [2024-10-08 18:28:43.184800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.118 [2024-10-08 18:28:43.184815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.118 [2024-10-08 18:28:43.184832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.118 [2024-10-08 18:28:43.184850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:25.118 [2024-10-08 18:28:43.184864] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a8950 is same with the state(6) to be set 00:21:25.118 [2024-10-08 18:28:43.186496] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:21:25.118 [2024-10-08 18:28:43.186544] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:25.118 [2024-10-08 18:28:43.186563] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:21:25.118 [2024-10-08 18:28:43.186582] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:21:25.118 task offset: 16384 on job bdev=Nvme6n1 fails
00:21:25.118
00:21:25.118                                  Latency(us)
00:21:25.118 [2024-10-08T16:28:43.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:25.118 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:25.118 Job: Nvme1n1 ended in about 0.81 seconds with error
00:21:25.118 Verification LBA range: start 0x0 length 0x400
00:21:25.118 Nvme1n1  : 0.81 162.34 10.15 78.71 0.00 262219.79 35923.44 236123.78
00:21:25.118 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:25.118 Job: Nvme2n1 ended in about 0.82 seconds with error
00:21:25.118 Verification LBA range: start 0x0 length 0x400
00:21:25.118 Nvme2n1  : 0.82 156.77 9.80 78.39 0.00 262726.54 35340.89 228356.55
00:21:25.118 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:25.118 Job: Nvme3n1 ended in about 0.82 seconds with error
00:21:25.118 Verification LBA range: start 0x0 length 0x400
00:21:25.118 Nvme3n1  : 0.82 156.13 9.76 78.07 0.00 257782.52 17282.09 264085.81
00:21:25.118 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:25.118 Job: Nvme4n1 ended in about 0.82 seconds with error
00:21:25.118 Verification LBA range: start 0x0 length 0x400
00:21:25.118 Nvme4n1  : 0.82 161.58 10.10 77.75 0.00 246256.42 23010.42 251658.24
00:21:25.118 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:25.118 Job: Nvme5n1 ended in about 0.81 seconds with error
00:21:25.118 Verification LBA range: start 0x0 length 0x400
00:21:25.118 Nvme5n1  : 0.81 158.07 9.88 79.03 0.00 242090.29 20000.62 257872.02
00:21:25.118 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:25.118 Job: Nvme6n1 ended in about 0.80 seconds with error
00:21:25.118 Verification LBA range: start 0x0 length 0x400
00:21:25.118 Nvme6n1  : 0.80 159.96 10.00 79.98 0.00 232830.29 32039.82 278066.82
00:21:25.118 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:25.118 Job: Nvme7n1 ended in about 0.83 seconds with error
00:21:25.118 Verification LBA range: start 0x0 length 0x400
00:21:25.118 Nvme7n1  : 0.83 154.41 9.65 77.20 0.00 236443.12 17087.91 256318.58
00:21:25.118 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:25.118 Job: Nvme8n1 ended in about 0.83 seconds with error
00:21:25.118 Verification LBA range: start 0x0 length 0x400
00:21:25.118 Nvme8n1  : 0.83 153.80 9.61 76.90 0.00 231635.88 21554.06 248551.35
00:21:25.118 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:25.118 Job: Nvme9n1 ended in about 0.80 seconds with error
00:21:25.118 Verification LBA range: start 0x0 length 0x400
00:21:25.118 Nvme9n1  : 0.80 159.70 9.98 79.85 0.00 215497.89 13981.01 292047.83
00:21:25.118 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:25.118 Job: Nvme10n1 ended in about 0.80 seconds with error
00:21:25.118 Verification LBA range: start 0x0 length 0x400
00:21:25.118 Nvme10n1 : 0.80 159.48 9.97 79.74 0.00 209984.98 14563.56 254765.13
00:21:25.118 [2024-10-08T16:28:43.405Z] ===================================================================================================================
00:21:25.118 [2024-10-08T16:28:43.405Z] Total    : 1582.23 98.89 785.62 0.00 239810.25 13981.01 292047.83
00:21:25.118 [2024-10-08 18:28:43.212816] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:25.118 [2024-10-08 18:28:43.212908] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:21:25.118 [2024-10-08 18:28:43.213207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.118 [2024-10-08 18:28:43.213245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20d0410 with addr=10.0.0.2, port=4420 00:21:25.118 [2024-10-08 18:28:43.213266] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20d0410 is same with the state(6) to be set 00:21:25.118 [2024-10-08 18:28:43.213299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca57b0 (9): Bad file descriptor 00:21:25.118 [2024-10-08 18:28:43.213325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c9c2a0 (9): Bad file descriptor 00:21:25.118 [2024-10-08 18:28:43.213345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca18d0 (9): Bad file descriptor 00:21:25.118 [2024-10-08 18:28:43.213362] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:25.118 [2024-10-08 18:28:43.213376] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:25.118 [2024-10-08 18:28:43.213392] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:21:25.118 [2024-10-08 18:28:43.213598] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:25.118 [2024-10-08 18:28:43.213718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.118 [2024-10-08 18:28:43.213747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c9230 with addr=10.0.0.2, port=4420 00:21:25.118 [2024-10-08 18:28:43.213765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c9230 is same with the state(6) to be set 00:21:25.118 [2024-10-08 18:28:43.213852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.118 [2024-10-08 18:28:43.213879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2118a50 with addr=10.0.0.2, port=4420 00:21:25.118 [2024-10-08 18:28:43.213896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2118a50 is same with the state(6) to be set 00:21:25.118 [2024-10-08 18:28:43.213973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.118 [2024-10-08 18:28:43.214001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e66d0 with addr=10.0.0.2, port=4420 00:21:25.118 [2024-10-08 18:28:43.214017] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e66d0 is same with the state(6) to be set 00:21:25.118 [2024-10-08 18:28:43.214103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.118 [2024-10-08 18:28:43.214130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c8790 with addr=10.0.0.2, port=4420 00:21:25.118 [2024-10-08 18:28:43.214146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c8790 is same with the state(6) to be set 00:21:25.118 [2024-10-08 18:28:43.214231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.118 [2024-10-08 18:28:43.214258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0e1e0 with addr=10.0.0.2, port=4420 00:21:25.118 [2024-10-08 18:28:43.214275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0e1e0 is same with the state(6) to be set 00:21:25.118 [2024-10-08 18:28:43.214300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20d0410 (9): Bad file descriptor 00:21:25.118 [2024-10-08 18:28:43.214320] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:25.118 [2024-10-08 18:28:43.214347] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:25.118 [2024-10-08 18:28:43.214363] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:25.118 [2024-10-08 18:28:43.214383] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:25.118 [2024-10-08 18:28:43.214399] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:21:25.118 [2024-10-08 18:28:43.214413] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:21:25.118 [2024-10-08 18:28:43.214430] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:25.118 [2024-10-08 18:28:43.214445] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:25.118 [2024-10-08 18:28:43.214458] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:25.118 [2024-10-08 18:28:43.214497] bdev_nvme.c:3069:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:25.118 [2024-10-08 18:28:43.214520] bdev_nvme.c:3069:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:25.118 [2024-10-08 18:28:43.214540] bdev_nvme.c:3069:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:25.118 [2024-10-08 18:28:43.214561] bdev_nvme.c:3069:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:25.118 [2024-10-08 18:28:43.215190] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.118 [2024-10-08 18:28:43.215215] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.118 [2024-10-08 18:28:43.215229] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.118 [2024-10-08 18:28:43.215251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c9230 (9): Bad file descriptor 00:21:25.118 [2024-10-08 18:28:43.215272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2118a50 (9): Bad file descriptor 00:21:25.118 [2024-10-08 18:28:43.215300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20e66d0 (9): Bad file descriptor 00:21:25.118 [2024-10-08 18:28:43.215320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c8790 (9): Bad file descriptor 00:21:25.118 [2024-10-08 18:28:43.215338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0e1e0 (9): Bad file descriptor 00:21:25.118 [2024-10-08 18:28:43.215354] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:21:25.119 [2024-10-08 18:28:43.215368] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:21:25.119 [2024-10-08 18:28:43.215381] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:21:25.119 [2024-10-08 18:28:43.215445] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:21:25.119 [2024-10-08 18:28:43.215469] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.119 [2024-10-08 18:28:43.215493] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:21:25.119 [2024-10-08 18:28:43.215509] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:21:25.119 [2024-10-08 18:28:43.215523] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:21:25.119 [2024-10-08 18:28:43.215540] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:25.119 [2024-10-08 18:28:43.215554] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:25.119 [2024-10-08 18:28:43.215573] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:25.119 [2024-10-08 18:28:43.215591] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:21:25.119 [2024-10-08 18:28:43.215606] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:21:25.119 [2024-10-08 18:28:43.215619] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:21:25.119 [2024-10-08 18:28:43.215635] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:21:25.119 [2024-10-08 18:28:43.215650] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:21:25.119 [2024-10-08 18:28:43.215663] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:21:25.119 [2024-10-08 18:28:43.215679] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:21:25.119 [2024-10-08 18:28:43.215693] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:21:25.119 [2024-10-08 18:28:43.215707] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:21:25.119 [2024-10-08 18:28:43.215761] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.119 [2024-10-08 18:28:43.215780] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.119 [2024-10-08 18:28:43.215793] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.119 [2024-10-08 18:28:43.215805] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.119 [2024-10-08 18:28:43.215816] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.119 [2024-10-08 18:28:43.215904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:25.119 [2024-10-08 18:28:43.215931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c5d50 with addr=10.0.0.2, port=4420 00:21:25.119 [2024-10-08 18:28:43.215947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c5d50 is same with the state(6) to be set 00:21:25.119 [2024-10-08 18:28:43.215995] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c5d50 (9): Bad file descriptor 00:21:25.119 [2024-10-08 18:28:43.216040] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:25.119 [2024-10-08 18:28:43.216058] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:25.119 [2024-10-08 18:28:43.216072] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:21:25.119 [2024-10-08 18:28:43.216110] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:25.686 18:28:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1297935 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1297935 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 1297935 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:26.620 18:28:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:26.620 rmmod nvme_tcp 00:21:26.620 rmmod nvme_fabrics 00:21:26.620 rmmod nvme_keyring 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 1297761 ']' 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 1297761 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1297761 ']' 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1297761 00:21:26.620 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1297761) - No such process 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 1297761 is not found' 00:21:26.620 Process with pid 1297761 is not found 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-save 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-restore 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:26.620 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:26.621 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.621 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:26.621 18:28:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:29.159 00:21:29.159 real 0m7.392s 00:21:29.159 user 0m17.644s 00:21:29.159 sys 0m1.501s 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:29.159 
************************************ 00:21:29.159 END TEST nvmf_shutdown_tc3 00:21:29.159 ************************************ 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:29.159 ************************************ 00:21:29.159 START TEST nvmf_shutdown_tc4 00:21:29.159 ************************************ 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:29.159 18:28:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:29.159 18:28:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:29.159 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:29.159 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:29.159 18:28:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:29.159 Found net devices under 0000:09:00.0: cvl_0_0 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:29.159 Found net devices under 0000:09:00.1: cvl_0_1 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:29.159 18:28:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:29.159 18:28:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:29.159 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:29.159 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:29.159 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:29.159 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:29.159 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:29.159 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:29.159 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:29.159 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:29.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:29.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:21:29.159 00:21:29.159 --- 10.0.0.2 ping statistics --- 00:21:29.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.159 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:21:29.159 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:29.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:29.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:21:29.159 00:21:29.159 --- 10.0.0.1 ping statistics --- 00:21:29.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:29.159 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:21:29.159 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:29.159 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:21:29.159 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:29.159 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:29.159 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:29.159 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:29.159 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:29.159 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:29.159 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:29.159 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:29.159 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:29.159 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:29.160 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:29.160 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=1298836 00:21:29.160 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 1298836 00:21:29.160 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 1298836 ']' 00:21:29.160 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.160 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:29.160 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:29.160 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:29.160 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:29.160 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:29.160 [2024-10-08 18:28:47.174460] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:21:29.160 [2024-10-08 18:28:47.174551] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.160 [2024-10-08 18:28:47.240697] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:29.160 [2024-10-08 18:28:47.343582] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:29.160 [2024-10-08 18:28:47.343640] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:29.160 [2024-10-08 18:28:47.343653] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:29.160 [2024-10-08 18:28:47.343674] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:29.160 [2024-10-08 18:28:47.343684] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:29.160 [2024-10-08 18:28:47.345203] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:29.160 [2024-10-08 18:28:47.345268] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:21:29.160 [2024-10-08 18:28:47.345331] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:21:29.160 [2024-10-08 18:28:47.345335] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:29.418 [2024-10-08 18:28:47.504417] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # 
rpc_cmd 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.418 18:28:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:29.418 Malloc1 00:21:29.418 [2024-10-08 18:28:47.594588] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:29.418 Malloc2 00:21:29.418 Malloc3 00:21:29.677 Malloc4 00:21:29.677 Malloc5 00:21:29.677 Malloc6 00:21:29.677 Malloc7 00:21:29.677 Malloc8 00:21:29.677 Malloc9 00:21:29.934 Malloc10 00:21:29.934 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.934 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:29.934 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:29.934 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:29.934 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1299019 00:21:29.934 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:21:29.934 18:28:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:21:29.934 [2024-10-08 18:28:48.098046] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
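The xtrace above is the create_subsystems phase of target/shutdown.sh: num_subsystems expands to {1..10}, the loop cat-s one RPC snippet per subsystem into test/nvmf/target/rpcs.txt, and the batch is then replayed through rpc_cmd, which is why Malloc1 through Malloc10 appear as backing bdevs before spdk_nvme_perf is started against the 10.0.0.2:4420 TCP listener. A minimal sketch of that pattern follows; the Malloc sizes, NQNs and listener arguments are illustrative assumptions, not the literal script.

num_subsystems=({1..10})
rm -f rpcs.txt
# assumes a running nvmf target with the tcp transport already created
for i in "${num_subsystems[@]}"; do
cat >> rpcs.txt <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
# rpc_cmd in the trace wraps scripts/rpc.py; a line-by-line replay is equivalent
while read -r rpc; do
    scripts/rpc.py $rpc   # intentional word splitting: each line is one RPC plus its args
done < rpcs.txt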
00:21:35.208 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:35.208 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1298836 00:21:35.208 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 1298836 ']' 00:21:35.208 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 1298836 00:21:35.208 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:21:35.208 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:35.208 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1298836 00:21:35.208 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:35.208 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:35.208 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1298836' 00:21:35.208 killing process with pid 1298836 00:21:35.208 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 1298836 00:21:35.208 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 1298836 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 starting I/O failed: -6 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 starting I/O failed: -6 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 starting I/O failed: -6 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 starting I/O failed: -6 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 starting I/O failed: -6 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 starting I/O failed: -6 00:21:35.208 [2024-10-08 18:28:53.099209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:35.208 starting I/O failed: -6 00:21:35.208 starting I/O 
failed: -6 00:21:35.208 starting I/O failed: -6 00:21:35.208 NVMe io qpair process completion error 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 starting I/O failed: -6 00:21:35.208 [2024-10-08 18:28:53.100361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cef1e0 is same with the state(6) to be set 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 [2024-10-08 18:28:53.100413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cef1e0 is same with the state(6) to be set 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 starting I/O failed: -6 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 starting I/O failed: -6 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 starting I/O failed: -6 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 starting I/O failed: -6 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 starting I/O failed: -6 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 starting I/O failed: -6 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 starting I/O failed: -6 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 starting I/O failed: -6 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 [2024-10-08 18:28:53.101079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:35.208 [2024-10-08 18:28:53.101133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2f670 is same with the state(6) to be set 00:21:35.208 [2024-10-08 18:28:53.101188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2f670 is same with the state(6) to be set 00:21:35.208 [2024-10-08 18:28:53.101205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2f670 is same with the state(6) to be set 00:21:35.208 [2024-10-08 
18:28:53.101218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2f670 is same with the state(6) to be set 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 [2024-10-08 18:28:53.101230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2f670 is same with the state(6) to be set 00:21:35.208 starting I/O failed: -6 00:21:35.208 [2024-10-08 18:28:53.101243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2f670 is same with the state(6) to be set 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 [2024-10-08 18:28:53.101257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2f670 is same with the state(6) to be set 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 [2024-10-08 18:28:53.101280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2f670 is same with the state(6) to be set 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 [2024-10-08 18:28:53.101312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2f670 is same with starting I/O failed: -6 00:21:35.208 the state(6) to be set 00:21:35.208 [2024-10-08 18:28:53.101328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2f670 is same with Write completed with error (sct=0, sc=8) 00:21:35.208 the state(6) to be set 00:21:35.208 starting I/O failed: -6 00:21:35.208 [2024-10-08 18:28:53.101342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2f670 is same with the state(6) to be set 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 [2024-10-08 18:28:53.101355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2f670 is same with the state(6) to be set 00:21:35.208 [2024-10-08 18:28:53.101367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2f670 is same with the state(6) to be set 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 [2024-10-08 18:28:53.101380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2f670 is same with the state(6) to be set 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 [2024-10-08 18:28:53.101392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2f670 is same with the state(6) to be set 00:21:35.208 starting I/O failed: -6 00:21:35.208 [2024-10-08 18:28:53.101404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2f670 is same with the state(6) to be set 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 starting I/O failed: -6 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 starting I/O failed: -6 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 starting I/O failed: -6 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 starting I/O failed: -6 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 starting I/O failed: -6 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 starting I/O failed: 
-6 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 starting I/O failed: -6 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.208 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 [2024-10-08 18:28:53.101809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fb40 is same with the state(6) to be set 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 [2024-10-08 18:28:53.101850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fb40 is same with the state(6) to be set 00:21:35.209 starting I/O failed: -6 00:21:35.209 [2024-10-08 18:28:53.101875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fb40 is same with the state(6) to be set 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 [2024-10-08 18:28:53.101893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fb40 is same with the state(6) to be set 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 [2024-10-08 18:28:53.101908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fb40 is same with the state(6) to be set 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 [2024-10-08 18:28:53.101921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fb40 is same with the state(6) to be set 00:21:35.209 [2024-10-08 18:28:53.101934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fb40 is same with Write completed with error (sct=0, sc=8) 00:21:35.209 the state(6) to be set 00:21:35.209 starting I/O failed: -6 00:21:35.209 [2024-10-08 18:28:53.101951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2fb40 is same with the state(6) to be set 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 [2024-10-08 18:28:53.102108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 
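The failure pattern running through these lines is the point of nvmf_shutdown_tc4: killprocess, traced at 18:28:53 above, signals the target (pid 1298836, reactor_1) while spdk_nvme_perf still has a queue depth of 128 writes in flight, and the trap registered at target/shutdown.sh@152 guarantees the perf process (perfpid 1299019) is itself killed if the test aborts. On the initiator side the queue pairs then report CQ transport error -6 (ENXIO, "No such device or address", as the log prints), and the outstanding writes complete with sct=0, sc=8, which in NVMe generic-status terms corresponds to "Command Aborted due to SQ Deletion". The killprocess helper follows roughly the pattern sketched below; this is a simplified reading of the traced checks, not the real autotest_common.sh implementation.

# Hedged sketch of the killprocess pattern visible in the trace: refuse an empty
# pid, skip pids that are already gone or that turn out to be the sudo wrapper,
# then signal the process and wait so its exit status is reaped.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                      # the '[ -z ... ]' guard in the trace
    kill -0 "$pid" 2>/dev/null || return 0         # nothing to do if already exited
    if [ "$(uname)" = Linux ]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")    # reactor_1 in this run
        [ "$name" = sudo ] && return 1             # never kill the sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"                                    # default SIGTERM lets the target shut down
    wait "$pid" || true                            # reap; a non-zero status is expected here
}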
00:21:35.209 [2024-10-08 18:28:53.102366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2ecd0 is same with starting I/O failed: -6 00:21:35.209 the state(6) to be set 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 [2024-10-08 18:28:53.102396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2ecd0 is same with starting I/O failed: -6 00:21:35.209 the state(6) to be set 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 [2024-10-08 18:28:53.102412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2ecd0 is same with the state(6) to be set 00:21:35.209 [2024-10-08 18:28:53.102427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2ecd0 is same with the state(6) to be set 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 [2024-10-08 18:28:53.102439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2ecd0 is same with the state(6) to be set 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 [2024-10-08 18:28:53.102453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2ecd0 is same with the state(6) to be set 00:21:35.209 starting I/O failed: -6 00:21:35.209 [2024-10-08 18:28:53.102467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2ecd0 is same with the state(6) to be set 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 [2024-10-08 18:28:53.102480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2ecd0 is same with starting I/O failed: -6 00:21:35.209 the state(6) to be set 00:21:35.209 [2024-10-08 18:28:53.102493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2ecd0 is same with the state(6) to be set 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O 
failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 [2024-10-08 18:28:53.103388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 
00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.209 starting I/O failed: -6 00:21:35.209 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 
00:21:35.210 starting I/O failed: -6 00:21:35.210 [2024-10-08 18:28:53.104932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:35.210 NVMe io qpair process completion error 00:21:35.210 [2024-10-08 18:28:53.110309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5f820 is same with the state(6) to be set 00:21:35.210 [2024-10-08 18:28:53.110356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5f820 is same with the state(6) to be set 00:21:35.210 [2024-10-08 18:28:53.110372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5f820 is same with the state(6) to be set 00:21:35.210 [2024-10-08 18:28:53.110386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5f820 is same with the state(6) to be set 00:21:35.210 [2024-10-08 18:28:53.110398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5f820 is same with the state(6) to be set 00:21:35.210 [2024-10-08 18:28:53.110411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5f820 is same with the state(6) to be set 00:21:35.210 [2024-10-08 18:28:53.110423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5f820 is same with the state(6) to be set 00:21:35.210 [2024-10-08 18:28:53.110436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5f820 is same with the state(6) to be set 00:21:35.210 [2024-10-08 18:28:53.110447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5f820 is same with the state(6) to be set 00:21:35.210 [2024-10-08 18:28:53.110459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5f820 is same with the state(6) to be set 00:21:35.210 [2024-10-08 18:28:53.110471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5f820 is same with the state(6) to be set 00:21:35.210 [2024-10-08 18:28:53.110483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5f820 is same with the state(6) to be set 00:21:35.210 [2024-10-08 18:28:53.110496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5f820 is same with the state(6) to be set 00:21:35.210 [2024-10-08 18:28:53.110507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5f820 is same with the state(6) to be set 00:21:35.210 [2024-10-08 18:28:53.110537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5f820 is same with the state(6) to be set 00:21:35.210 [2024-10-08 18:28:53.110550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5f820 is same with the state(6) to be set 00:21:35.210 [2024-10-08 18:28:53.110563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5f820 is same with the state(6) to be set 00:21:35.210 [2024-10-08 18:28:53.110585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5f820 is same with the state(6) to be set 00:21:35.210 [2024-10-08 18:28:53.110597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5f820 is same with the state(6) to be set 00:21:35.210 [2024-10-08 18:28:53.110609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5f820 is same with the state(6) to be set 00:21:35.210 
[2024-10-08 18:28:53.110621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5f820 is same with the state(6) to be set 00:21:35.210 [2024-10-08 18:28:53.111447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f47350 is same with the state(6) to be set 00:21:35.210 [2024-10-08 18:28:53.111481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f47350 is same with the state(6) to be set 00:21:35.210 [2024-10-08 18:28:53.111500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f47350 is same with the state(6) to be set 00:21:35.210 [2024-10-08 18:28:53.111516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f47350 is same with the state(6) to be set 00:21:35.210 [2024-10-08 18:28:53.111531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f47350 is same with the state(6) to be set 00:21:35.210 [2024-10-08 18:28:53.111543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f47350 is same with the state(6) to be set 00:21:35.210 [2024-10-08 18:28:53.111555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f47350 is same with the state(6) to be set 00:21:35.210 [2024-10-08 18:28:53.111577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f47350 is same with the state(6) to be set 00:21:35.210 [2024-10-08 18:28:53.111589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f47350 is same with the state(6) to be set 00:21:35.210 [2024-10-08 18:28:53.111601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f47350 is same with the state(6) to be set 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O 
failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 starting I/O failed: -6 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.210 Write completed with error (sct=0, sc=8) 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 [2024-10-08 18:28:53.113395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 Write 
completed with error (sct=0, sc=8) 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 [2024-10-08 18:28:53.114335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 Write completed 
with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 [2024-10-08 18:28:53.115028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eacdf0 is same with Write completed with error (sct=0, sc=8) 00:21:35.211 the state(6) to be set 00:21:35.211 starting I/O failed: -6 00:21:35.211 [2024-10-08 18:28:53.115063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eacdf0 is same with the state(6) to be set 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 [2024-10-08 18:28:53.115080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eacdf0 is same with starting I/O failed: -6 00:21:35.211 the state(6) to be set 00:21:35.211 [2024-10-08 18:28:53.115093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eacdf0 is same with Write completed with error (sct=0, sc=8) 00:21:35.211 the state(6) to be set 00:21:35.211 [2024-10-08 18:28:53.115107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eacdf0 is same with the state(6) to be set 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 [2024-10-08 18:28:53.115119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eacdf0 is same with starting I/O failed: -6 00:21:35.211 the state(6) to be set 00:21:35.211 [2024-10-08 18:28:53.115133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eacdf0 is same with Write completed with error (sct=0, sc=8) 00:21:35.211 the state(6) to be set 00:21:35.211 starting I/O failed: -6 00:21:35.211 [2024-10-08 18:28:53.115146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eacdf0 is same with the state(6) to be set 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 [2024-10-08 18:28:53.115158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eacdf0 is same with the state(6) to be set 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 
00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 [2024-10-08 18:28:53.115540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:35.211 starting I/O failed: -6 00:21:35.211 [2024-10-08 18:28:53.115689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ee80 is same with Write completed with error (sct=0, sc=8) 00:21:35.211 the state(6) to be set 00:21:35.211 starting I/O failed: -6 00:21:35.211 [2024-10-08 18:28:53.115720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ee80 is same with Write completed with error (sct=0, sc=8) 00:21:35.211 the state(6) to be set 00:21:35.211 starting I/O failed: -6 00:21:35.211 [2024-10-08 18:28:53.115740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ee80 is same with the state(6) to be set 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 [2024-10-08 18:28:53.115754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ee80 is same with the state(6) to be set 00:21:35.211 [2024-10-08 18:28:53.115767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ee80 is same with Write completed with error (sct=0, sc=8) 00:21:35.211 the state(6) to be set 00:21:35.211 starting I/O failed: -6 00:21:35.211 [2024-10-08 18:28:53.115783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ee80 is same with the state(6) to be set 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 [2024-10-08 18:28:53.115798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5ee80 is same with the state(6) to be set 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.211 Write completed with error (sct=0, 
sc=8) 00:21:35.211 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 00:21:35.212 starting I/O failed: -6 00:21:35.212 Write completed with error (sct=0, sc=8) 
00:21:35.212 starting I/O failed: -6
00:21:35.212 Write completed with error (sct=0, sc=8)
[... runs of "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries repeat between each of the messages below ...]
00:21:35.212 [2024-10-08 18:28:53.117458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:35.212 NVMe io qpair process completion error
00:21:35.212 [2024-10-08 18:28:53.118783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:35.213 [2024-10-08 18:28:53.119764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:21:35.213 [2024-10-08 18:28:53.120892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:35.213 [2024-10-08 18:28:53.122837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:35.213 NVMe io qpair process completion error
00:21:35.214 [2024-10-08 18:28:53.123965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:35.214 [2024-10-08 18:28:53.124965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:21:35.214 [2024-10-08 18:28:53.126367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:35.215 [2024-10-08 18:28:53.128165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:35.215 NVMe io qpair process completion error
00:21:35.215 [2024-10-08 18:28:53.129449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:35.215 [2024-10-08 18:28:53.130594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:35.216 [2024-10-08 18:28:53.132011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:21:35.216 [2024-10-08 18:28:53.134950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:35.216 NVMe io qpair process completion error
00:21:35.216 [2024-10-08 18:28:53.136200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:35.217 [2024-10-08 18:28:53.137180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:35.217 [2024-10-08 18:28:53.138331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:21:35.218 [2024-10-08 18:28:53.141383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:21:35.218 NVMe io qpair process completion error
00:21:35.218 [2024-10-08 18:28:53.142876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:21:35.218 [2024-10-08 18:28:53.143874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:21:35.219 [2024-10-08 18:28:53.145045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:21:35.219 starting I/O failed: -6
00:21:35.219 Write completed with error (sct=0, sc=8)
00:21:35.219 starting I/O failed: -6
00:21:35.219 Write completed with
error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 starting I/O failed: -6 00:21:35.219 [2024-10-08 18:28:53.147391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:35.219 NVMe io qpair process completion error 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 
00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error 
(sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.219 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write 
completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 [2024-10-08 18:28:53.151153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 
00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 [2024-10-08 18:28:53.152092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 00:21:35.220 Write completed with error (sct=0, sc=8) 00:21:35.220 starting I/O failed: -6 
00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 [2024-10-08 18:28:53.153323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write 
completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write 
completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 starting I/O failed: -6 00:21:35.221 [2024-10-08 18:28:53.155442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:21:35.221 NVMe io qpair process completion error 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 
00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.221 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 Write completed with error (sct=0, sc=8) 00:21:35.222 [2024-10-08 18:28:53.159810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:21:35.222 NVMe io qpair process 
completion error
00:21:35.222 Initializing NVMe Controllers
00:21:35.222 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:21:35.222 Controller IO queue size 128, less than required.
00:21:35.222 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:35.222 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:35.222 Controller IO queue size 128, less than required.
00:21:35.222 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:35.222 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:21:35.222 Controller IO queue size 128, less than required.
00:21:35.222 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:35.222 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:21:35.222 Controller IO queue size 128, less than required.
00:21:35.222 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:35.222 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:21:35.222 Controller IO queue size 128, less than required.
00:21:35.222 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:35.222 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:21:35.222 Controller IO queue size 128, less than required.
00:21:35.222 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:35.222 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:21:35.222 Controller IO queue size 128, less than required.
00:21:35.222 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:35.222 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:21:35.222 Controller IO queue size 128, less than required.
00:21:35.222 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:35.222 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:21:35.222 Controller IO queue size 128, less than required.
00:21:35.222 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:35.222 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:21:35.222 Controller IO queue size 128, less than required.
00:21:35.222 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
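Each "Controller IO queue size 128, less than required" warning above means the controller advertises an I/O queue depth of 128 while the workload asked for a deeper queue, so the excess requests sit queued inside the NVMe driver instead of on the wire, which inflates the measured latency. Capping the queue depth at or below the advertised size, or using a smaller I/O size, avoids that queuing. A hedged example invocation follows; the flag values are illustrative assumptions, not this job's actual settings.

# Keep the per-qpair queue depth (-q) at or below the controller's advertised
# IO queue size (128 here); -o is the I/O size in bytes, -w the workload type,
# -t the run time in seconds, -r the target transport ID string.
./build/bin/spdk_nvme_perf -q 128 -o 4096 -w write -t 10 \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'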
00:21:35.222 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:21:35.222 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:35.222 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:21:35.222 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:21:35.222 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:21:35.222 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:21:35.222 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:21:35.222 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:21:35.222 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:21:35.222 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:21:35.222 Initialization complete. Launching workers.
00:21:35.222 ========================================================
00:21:35.222 Latency(us)
00:21:35.222 Device Information : IOPS MiB/s Average min max
00:21:35.222 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1774.87 76.26 72050.58 512.78 135085.44
00:21:35.222 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1810.71 77.80 69941.80 880.69 123113.66
00:21:35.222 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1844.22 79.24 69349.31 895.36 120450.31
00:21:35.222 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1845.92 79.32 69243.36 550.97 118025.38
00:21:35.222 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1835.95 78.89 68990.66 900.15 117530.39
00:21:35.222 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1861.40 79.98 68070.09 798.99 115637.34
00:21:35.222 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1874.76 80.56 67605.34 836.34 115943.18
00:21:35.222 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1806.47 77.62 70187.05 842.01 124536.20
00:21:35.222 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1778.48 76.42 71330.38 904.19 115807.88
00:21:35.222 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1783.57 76.64 71169.96 909.58 132234.70
00:21:35.222 ========================================================
00:21:35.222 Total : 18216.34 782.73 69769.06 512.78 135085.44
00:21:35.222
00:21:35.222 [2024-10-08 18:28:53.164930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbaf9d0 is same with the state(6) to be set
00:21:35.222 [2024-10-08 18:28:53.165028] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbaf7f0 is same with the state(6) to be set
00:21:35.222 [2024-10-08 18:28:53.165088] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb4370 is same with the state(6) to be set
00:21:35.222 [2024-10-08 18:28:53.165155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb3d10 is same with the state(6) to be set
00:21:35.222 [2024-10-08 18:28:53.165214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbadab0 is same with the state(6) to be set
00:21:35.222 [2024-10-08 18:28:53.165270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0xbad780 is same with the state(6) to be set 00:21:35.222 [2024-10-08 18:28:53.165340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb46a0 is same with the state(6) to be set 00:21:35.222 [2024-10-08 18:28:53.165398] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbadde0 is same with the state(6) to be set 00:21:35.222 [2024-10-08 18:28:53.165462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbafbb0 is same with the state(6) to be set 00:21:35.222 [2024-10-08 18:28:53.165520] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb4040 is same with the state(6) to be set 00:21:35.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:21:35.517 18:28:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:21:36.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1299019 00:21:36.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:21:36.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1299019 00:21:36.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:21:36.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:36.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:21:36.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:36.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 1299019 00:21:36.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:21:36.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:36.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:36.479 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:36.480 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:21:36.480 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:36.480 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:36.480 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:36.480 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:36.480 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:36.480 18:28:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:21:36.480 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:36.480 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:21:36.480 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:36.480 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:36.480 rmmod nvme_tcp 00:21:36.480 rmmod nvme_fabrics 00:21:36.480 rmmod nvme_keyring 00:21:36.480 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:36.480 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:21:36.480 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:21:36.480 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 1298836 ']' 00:21:36.480 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 1298836 00:21:36.480 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 1298836 ']' 00:21:36.480 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 1298836 00:21:36.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1298836) - No such process 00:21:36.480 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 1298836 is not found' 00:21:36.480 Process with pid 1298836 is not found 00:21:36.480 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:36.480 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:36.480 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:36.480 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:21:36.480 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save 00:21:36.480 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:36.480 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore 00:21:36.480 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:36.480 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:36.480 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.480 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:36.480 18:28:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.017 18:28:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:39.017 00:21:39.017 real 0m9.829s 00:21:39.017 user 0m23.750s 00:21:39.017 sys 0m5.724s 00:21:39.017 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:39.017 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:39.017 ************************************ 00:21:39.017 END TEST nvmf_shutdown_tc4 00:21:39.017 ************************************ 00:21:39.017 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:21:39.017 00:21:39.017 real 0m37.614s 00:21:39.017 user 1m40.982s 00:21:39.017 sys 0m12.161s 00:21:39.017 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:39.017 18:28:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:39.017 ************************************ 00:21:39.017 END TEST nvmf_shutdown 00:21:39.017 ************************************ 00:21:39.017 18:28:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:21:39.017 00:21:39.017 real 11m34.685s 00:21:39.017 user 27m26.309s 00:21:39.017 sys 2m46.627s 00:21:39.017 18:28:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:39.017 18:28:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:39.017 ************************************ 00:21:39.017 END TEST nvmf_target_extra 00:21:39.017 ************************************ 00:21:39.017 18:28:56 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:39.017 18:28:56 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:39.017 18:28:56 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:39.017 18:28:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:39.017 ************************************ 00:21:39.017 START TEST nvmf_host 00:21:39.017 ************************************ 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:39.017 * Looking for test storage... 
00:21:39.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:39.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.017 --rc genhtml_branch_coverage=1 00:21:39.017 --rc genhtml_function_coverage=1 00:21:39.017 --rc genhtml_legend=1 00:21:39.017 --rc geninfo_all_blocks=1 00:21:39.017 --rc geninfo_unexecuted_blocks=1 00:21:39.017 00:21:39.017 ' 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:39.017 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.017 --rc genhtml_branch_coverage=1 00:21:39.017 --rc genhtml_function_coverage=1 00:21:39.017 --rc genhtml_legend=1 00:21:39.017 --rc geninfo_all_blocks=1 00:21:39.017 --rc geninfo_unexecuted_blocks=1 00:21:39.017 00:21:39.017 ' 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:39.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.017 --rc genhtml_branch_coverage=1 00:21:39.017 --rc genhtml_function_coverage=1 00:21:39.017 --rc genhtml_legend=1 00:21:39.017 --rc geninfo_all_blocks=1 00:21:39.017 --rc geninfo_unexecuted_blocks=1 00:21:39.017 00:21:39.017 ' 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:39.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.017 --rc genhtml_branch_coverage=1 00:21:39.017 --rc genhtml_function_coverage=1 00:21:39.017 --rc genhtml_legend=1 00:21:39.017 --rc geninfo_all_blocks=1 00:21:39.017 --rc geninfo_unexecuted_blocks=1 00:21:39.017 00:21:39.017 ' 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.017 18:28:56 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.018 18:28:56 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.018 18:28:56 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:39.018 18:28:56 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.018 18:28:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:39.018 18:28:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:39.018 18:28:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:39.018 18:28:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:39.018 18:28:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:39.018 18:28:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:39.018 18:28:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:39.018 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:39.018 18:28:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:39.018 18:28:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:39.018 18:28:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:39.018 18:28:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:39.018 18:28:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:39.018 18:28:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:39.018 18:28:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.018 ************************************ 00:21:39.018 START TEST nvmf_multicontroller 00:21:39.018 ************************************ 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:39.018 * Looking for test storage... 00:21:39.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:39.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.018 --rc genhtml_branch_coverage=1 00:21:39.018 --rc genhtml_function_coverage=1 00:21:39.018 --rc genhtml_legend=1 00:21:39.018 --rc geninfo_all_blocks=1 00:21:39.018 --rc geninfo_unexecuted_blocks=1 00:21:39.018 00:21:39.018 ' 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:39.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.018 --rc genhtml_branch_coverage=1 00:21:39.018 --rc genhtml_function_coverage=1 00:21:39.018 --rc genhtml_legend=1 00:21:39.018 --rc geninfo_all_blocks=1 00:21:39.018 --rc geninfo_unexecuted_blocks=1 00:21:39.018 00:21:39.018 ' 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:39.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.018 --rc genhtml_branch_coverage=1 00:21:39.018 --rc genhtml_function_coverage=1 00:21:39.018 --rc genhtml_legend=1 00:21:39.018 --rc geninfo_all_blocks=1 00:21:39.018 --rc geninfo_unexecuted_blocks=1 00:21:39.018 00:21:39.018 ' 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:39.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.018 --rc genhtml_branch_coverage=1 00:21:39.018 --rc genhtml_function_coverage=1 00:21:39.018 --rc genhtml_legend=1 00:21:39.018 --rc geninfo_all_blocks=1 00:21:39.018 --rc geninfo_unexecuted_blocks=1 00:21:39.018 00:21:39.018 ' 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:39.018 18:28:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.018 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.019 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:39.019 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.019 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:39.019 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:39.019 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:39.019 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:39.019 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:39.019 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:39.019 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:39.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:39.019 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:39.019 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:39.019 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:39.019 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:39.019 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:39.019 18:28:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:39.019 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:39.019 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:39.019 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:39.019 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:39.019 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:39.019 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:39.019 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:39.019 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:39.019 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:39.019 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.019 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:39.019 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.019 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:39.019 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:39.019 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:39.019 18:28:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:21:41.548 
18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:41.548 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:41.548 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:41.548 18:28:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:41.548 Found net devices under 0000:09:00.0: cvl_0_0 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:41.548 Found net devices under 0000:09:00.1: cvl_0_1 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 
00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:41.548 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:41.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:41.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:21:41.548 00:21:41.548 --- 10.0.0.2 ping statistics --- 00:21:41.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.549 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:21:41.549 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:41.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:41.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:21:41.549 00:21:41.549 --- 10.0.0.1 ping statistics --- 00:21:41.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.549 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:21:41.549 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:41.549 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0 00:21:41.549 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:41.549 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:41.549 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:41.549 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:41.549 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:41.549 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:41.549 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:41.549 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:41.549 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:41.549 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:41.549 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.549 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=1301822 00:21:41.549 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:41.549 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 1301822 00:21:41.549 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1301822 ']' 00:21:41.549 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.549 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:41.549 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.549 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:41.549 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.549 [2024-10-08 18:28:59.600840] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:21:41.549 [2024-10-08 18:28:59.600965] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.549 [2024-10-08 18:28:59.666333] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:41.549 [2024-10-08 18:28:59.771754] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.549 [2024-10-08 18:28:59.771809] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.549 [2024-10-08 18:28:59.771833] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.549 [2024-10-08 18:28:59.771844] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:41.549 [2024-10-08 18:28:59.771853] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:41.549 [2024-10-08 18:28:59.772686] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.549 [2024-10-08 18:28:59.772742] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:21:41.549 [2024-10-08 18:28:59.772746] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.807 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:41.807 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:21:41.807 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:41.807 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:41.807 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.807 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.807 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:41.807 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.807 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.807 [2024-10-08 18:28:59.907925] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.807 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.807 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:41.807 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.807 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.807 Malloc0 00:21:41.807 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.807 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:41.807 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.807 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:21:41.807 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.807 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:41.807 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.807 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.807 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.807 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:41.807 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.807 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.807 [2024-10-08 18:28:59.970450] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:41.807 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.807 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:41.807 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.807 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.807 [2024-10-08 18:28:59.978340] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:41.807 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.807 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:41.807 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.807 18:28:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.807 Malloc1 00:21:41.807 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.807 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:41.807 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.807 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.807 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.807 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:41.807 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.807 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.807 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.807 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:41.807 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.807 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.807 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.807 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:41.807 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.807 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.807 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.807 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1301849 00:21:41.807 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:41.807 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:41.807 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1301849 /var/tmp/bdevperf.sock 00:21:41.807 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1301849 ']' 00:21:41.807 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:41.807 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:41.807 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:41.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:41.807 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:41.807 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.398 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:42.398 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:21:42.398 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:42.398 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.398 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.398 NVMe0n1 00:21:42.398 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.398 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:42.398 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:42.398 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.398 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.398 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.398 1 00:21:42.398 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:42.398 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:42.398 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:42.398 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:42.398 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:42.398 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:42.398 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:42.398 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:42.398 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.398 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.398 request: 00:21:42.398 { 00:21:42.398 "name": "NVMe0", 00:21:42.398 "trtype": "tcp", 00:21:42.398 "traddr": "10.0.0.2", 00:21:42.398 "adrfam": "ipv4", 00:21:42.398 "trsvcid": "4420", 00:21:42.398 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:21:42.398 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:42.398 "hostaddr": "10.0.0.1", 00:21:42.398 "prchk_reftag": false, 00:21:42.398 "prchk_guard": false, 00:21:42.398 "hdgst": false, 00:21:42.398 "ddgst": false, 00:21:42.398 "allow_unrecognized_csi": false, 00:21:42.398 "method": "bdev_nvme_attach_controller", 00:21:42.398 "req_id": 1 00:21:42.398 } 00:21:42.398 Got JSON-RPC error response 00:21:42.398 response: 00:21:42.399 { 00:21:42.399 "code": -114, 00:21:42.399 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:42.399 } 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.399 request: 00:21:42.399 { 00:21:42.399 "name": "NVMe0", 00:21:42.399 "trtype": "tcp", 00:21:42.399 "traddr": "10.0.0.2", 00:21:42.399 "adrfam": "ipv4", 00:21:42.399 "trsvcid": "4420", 00:21:42.399 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:42.399 "hostaddr": "10.0.0.1", 00:21:42.399 "prchk_reftag": false, 00:21:42.399 "prchk_guard": false, 00:21:42.399 "hdgst": false, 00:21:42.399 "ddgst": false, 00:21:42.399 "allow_unrecognized_csi": false, 00:21:42.399 "method": "bdev_nvme_attach_controller", 00:21:42.399 "req_id": 1 00:21:42.399 } 00:21:42.399 Got JSON-RPC error response 00:21:42.399 response: 00:21:42.399 { 00:21:42.399 "code": -114, 00:21:42.399 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:42.399 } 00:21:42.399 18:29:00 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.399 request: 00:21:42.399 { 00:21:42.399 "name": "NVMe0", 00:21:42.399 "trtype": "tcp", 00:21:42.399 "traddr": "10.0.0.2", 00:21:42.399 "adrfam": "ipv4", 00:21:42.399 "trsvcid": "4420", 00:21:42.399 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.399 "hostaddr": "10.0.0.1", 00:21:42.399 "prchk_reftag": false, 00:21:42.399 "prchk_guard": false, 00:21:42.399 "hdgst": false, 00:21:42.399 "ddgst": false, 00:21:42.399 "multipath": "disable", 00:21:42.399 "allow_unrecognized_csi": false, 00:21:42.399 "method": "bdev_nvme_attach_controller", 00:21:42.399 "req_id": 1 00:21:42.399 } 00:21:42.399 Got JSON-RPC error response 00:21:42.399 response: 00:21:42.399 { 00:21:42.399 "code": -114, 00:21:42.399 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:21:42.399 } 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:42.399 18:29:00 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.399 request: 00:21:42.399 { 00:21:42.399 "name": "NVMe0", 00:21:42.399 "trtype": "tcp", 00:21:42.399 "traddr": "10.0.0.2", 00:21:42.399 "adrfam": "ipv4", 00:21:42.399 "trsvcid": "4420", 00:21:42.399 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.399 "hostaddr": "10.0.0.1", 00:21:42.399 "prchk_reftag": false, 00:21:42.399 "prchk_guard": false, 00:21:42.399 "hdgst": false, 00:21:42.399 "ddgst": false, 00:21:42.399 "multipath": "failover", 00:21:42.399 "allow_unrecognized_csi": false, 00:21:42.399 "method": "bdev_nvme_attach_controller", 00:21:42.399 "req_id": 1 00:21:42.399 } 00:21:42.399 Got JSON-RPC error response 00:21:42.399 response: 00:21:42.399 { 00:21:42.399 "code": -114, 00:21:42.399 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:42.399 } 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.399 NVMe0n1 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
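The rejected attaches above all hit the same guard in bdev_nvme_attach_controller: once a controller named NVMe0 exists, reusing that name for a different subsystem, or for the same subsystem with -x disable or -x failover on the path that is already registered, returns JSON-RPC error -114, while re-attaching the same subsystem through the second listener on port 4421 succeeds and simply adds another path. A minimal replay of that sequence against the bdevperf RPC socket, assuming scripts/rpc.py from the SPDK checkout and the 10.0.0.x test addresses used in this run:

    # Sketch only: replays the attach sequence exercised above against a running
    # bdevperf instance; socket path and addresses are the ones from this job.
    rpc="./scripts/rpc.py -s /var/tmp/bdevperf.sock"

    # First attach registers controller NVMe0 (creates bdev NVMe0n1)
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1

    # Reusing the name for another subsystem, or with -x disable/failover on the
    # same path, is rejected with code -114 as in the responses above
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 || echo "rejected as expected"

    # Same subsystem via the second listener adds another path under NVMe0
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1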
00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.399 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.658 00:21:42.658 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.658 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:42.658 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:42.658 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.658 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.658 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.658 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:42.658 18:29:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:44.031 { 00:21:44.031 "results": [ 00:21:44.031 { 00:21:44.031 "job": "NVMe0n1", 00:21:44.031 "core_mask": "0x1", 00:21:44.031 "workload": "write", 00:21:44.031 "status": "finished", 00:21:44.031 "queue_depth": 128, 00:21:44.031 "io_size": 4096, 00:21:44.031 "runtime": 1.008405, 00:21:44.031 "iops": 17982.854111195404, 00:21:44.031 "mibps": 70.24552387185705, 00:21:44.031 "io_failed": 0, 00:21:44.031 "io_timeout": 0, 00:21:44.031 "avg_latency_us": 7102.718217059013, 00:21:44.031 "min_latency_us": 5582.696296296296, 00:21:44.031 "max_latency_us": 17087.905185185184 00:21:44.031 } 00:21:44.031 ], 00:21:44.031 "core_count": 1 00:21:44.031 } 00:21:44.031 18:29:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:44.031 18:29:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.031 18:29:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.031 18:29:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.031 18:29:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:21:44.031 18:29:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1301849 00:21:44.031 18:29:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@950 -- # '[' -z 1301849 ']' 00:21:44.031 18:29:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1301849 00:21:44.031 18:29:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:21:44.031 18:29:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:44.031 18:29:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1301849 00:21:44.031 18:29:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:44.031 18:29:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:44.031 18:29:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1301849' 00:21:44.031 killing process with pid 1301849 00:21:44.031 18:29:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1301849 00:21:44.031 18:29:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1301849 00:21:44.031 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:44.031 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.031 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.031 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.031 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:44.031 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.031 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.031 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.031 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:44.031 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:44.031 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:21:44.031 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:44.031 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:21:44.031 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:21:44.031 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:44.031 [2024-10-08 18:29:00.081513] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:21:44.031 [2024-10-08 18:29:00.081629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1301849 ] 00:21:44.031 [2024-10-08 18:29:00.143899] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.031 [2024-10-08 18:29:00.258336] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.031 [2024-10-08 18:29:00.780486] bdev.c:4701:bdev_name_add: *ERROR*: Bdev name aa6b5ae5-720b-44de-920f-59381320a419 already exists 00:21:44.031 [2024-10-08 18:29:00.780526] bdev.c:7846:bdev_register: *ERROR*: Unable to add uuid:aa6b5ae5-720b-44de-920f-59381320a419 alias for bdev NVMe1n1 00:21:44.031 [2024-10-08 18:29:00.780557] bdev_nvme.c:4559:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:44.031 Running I/O for 1 seconds... 00:21:44.031 17908.00 IOPS, 69.95 MiB/s 00:21:44.031 Latency(us) 00:21:44.031 [2024-10-08T16:29:02.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.032 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:44.032 NVMe0n1 : 1.01 17982.85 70.25 0.00 0.00 7102.72 5582.70 17087.91 00:21:44.032 [2024-10-08T16:29:02.319Z] =================================================================================================================== 00:21:44.032 [2024-10-08T16:29:02.319Z] Total : 17982.85 70.25 0.00 0.00 7102.72 5582.70 17087.91 00:21:44.032 Received shutdown signal, test time was about 1.000000 seconds 00:21:44.032 00:21:44.032 Latency(us) 00:21:44.032 [2024-10-08T16:29:02.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.032 [2024-10-08T16:29:02.319Z] =================================================================================================================== 00:21:44.032 [2024-10-08T16:29:02.319Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:44.032 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:44.032 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:44.032 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:21:44.032 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:21:44.032 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:44.032 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:21:44.032 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:44.032 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:21:44.032 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:44.032 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:44.032 rmmod nvme_tcp 00:21:44.032 rmmod nvme_fabrics 00:21:44.032 rmmod nvme_keyring 00:21:44.289 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:44.289 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:21:44.289 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:21:44.289 
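The bdevperf numbers in the table above are internally consistent and easy to spot-check: throughput in MiB/s is just IOPS times the 4096-byte I/O size divided by 2^20. A quick check with the figures reported for NVMe0n1:

    # Values copied from the perform_tests result above
    awk 'BEGIN {
        iops = 17982.854111; io_size = 4096
        printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024)   # -> 70.25 MiB/s
    }'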
18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 1301822 ']' 00:21:44.289 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # killprocess 1301822 00:21:44.289 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1301822 ']' 00:21:44.289 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1301822 00:21:44.289 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:21:44.289 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:44.289 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1301822 00:21:44.289 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:44.289 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:44.289 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1301822' 00:21:44.289 killing process with pid 1301822 00:21:44.289 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1301822 00:21:44.289 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1301822 00:21:44.547 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:44.547 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:44.547 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:44.547 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:21:44.547 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:21:44.547 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:44.547 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:21:44.547 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:44.547 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:44.547 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.547 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:44.547 18:29:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:47.081 00:21:47.081 real 0m7.720s 00:21:47.081 user 0m11.730s 00:21:47.081 sys 0m2.512s 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:47.081 ************************************ 00:21:47.081 END TEST nvmf_multicontroller 00:21:47.081 ************************************ 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.081 ************************************ 00:21:47.081 START TEST nvmf_aer 00:21:47.081 ************************************ 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:47.081 * Looking for test storage... 00:21:47.081 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:47.081 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:47.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.082 --rc genhtml_branch_coverage=1 00:21:47.082 --rc genhtml_function_coverage=1 00:21:47.082 --rc genhtml_legend=1 00:21:47.082 --rc geninfo_all_blocks=1 00:21:47.082 --rc geninfo_unexecuted_blocks=1 00:21:47.082 00:21:47.082 ' 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:47.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.082 --rc genhtml_branch_coverage=1 00:21:47.082 --rc genhtml_function_coverage=1 00:21:47.082 --rc genhtml_legend=1 00:21:47.082 --rc geninfo_all_blocks=1 00:21:47.082 --rc geninfo_unexecuted_blocks=1 00:21:47.082 00:21:47.082 ' 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:47.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.082 --rc genhtml_branch_coverage=1 00:21:47.082 --rc genhtml_function_coverage=1 00:21:47.082 --rc genhtml_legend=1 00:21:47.082 --rc geninfo_all_blocks=1 00:21:47.082 --rc geninfo_unexecuted_blocks=1 00:21:47.082 00:21:47.082 ' 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:47.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.082 --rc genhtml_branch_coverage=1 00:21:47.082 --rc genhtml_function_coverage=1 00:21:47.082 --rc genhtml_legend=1 00:21:47.082 --rc geninfo_all_blocks=1 00:21:47.082 --rc geninfo_unexecuted_blocks=1 00:21:47.082 00:21:47.082 ' 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:47.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:21:47.082 18:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.984 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:48.984 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:21:48.984 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:48.984 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:48.984 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:48.984 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:48.984 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:48.984 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:21:48.984 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:48.984 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:21:48.984 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:21:48.984 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:21:48.984 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:21:48.984 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:21:48.984 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:21:48.984 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:48.984 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:48.984 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:48.984 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:48.984 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:48.984 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:48.984 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:48.984 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:48.984 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:48.984 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:48.985 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:48.985 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:48.985 Found net devices under 0000:09:00.0: cvl_0_0 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:48.985 18:29:07 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:48.985 Found net devices under 0000:09:00.1: cvl_0_1 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:48.985 
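nvmftestinit above resolves the two e810 ports to their kernel interfaces through /sys/bus/pci/devices/$pci/net/ (cvl_0_0 and cvl_0_1) and then builds a point-to-point topology by moving the target-side interface into its own network namespace, so initiator and target really talk over TCP. A condensed sketch of that setup, using the same interface names and addresses as this run:

    # Target interface goes into a dedicated netns; initiator stays in the default one
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Allow NVMe/TCP to port 4420, tagged with an SPDK_NVMF comment so teardown
    # can strip it again via iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'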
18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:48.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:48.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:21:48.985 00:21:48.985 --- 10.0.0.2 ping statistics --- 00:21:48.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.985 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:48.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:48.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:21:48.985 00:21:48.985 --- 10.0.0.1 ping statistics --- 00:21:48.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.985 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=1304181 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 1304181 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 1304181 ']' 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:48.985 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:49.243 [2024-10-08 18:29:07.314027] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
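Once both ping directions succeed, nvmfappstart launches the target application inside that namespace and the test waits for its RPC socket before issuing any configuration. A reduced sketch of the startup, assuming the build-tree paths from this job and a plain polling loop standing in for the real waitforlisten helper:

    modprobe nvme-tcp

    # Launch the NVMe-oF target on cores 0-3 inside the target namespace
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Simple stand-in for waitforlisten: poll the RPC socket until it answers
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done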
00:21:49.243 [2024-10-08 18:29:07.314114] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.243 [2024-10-08 18:29:07.379025] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:49.243 [2024-10-08 18:29:07.490030] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.243 [2024-10-08 18:29:07.490115] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.243 [2024-10-08 18:29:07.490129] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.243 [2024-10-08 18:29:07.490140] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:49.243 [2024-10-08 18:29:07.490149] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:49.243 [2024-10-08 18:29:07.491888] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.244 [2024-10-08 18:29:07.491951] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:49.244 [2024-10-08 18:29:07.492018] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:21:49.244 [2024-10-08 18:29:07.492021] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:49.501 [2024-10-08 18:29:07.655984] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:49.501 Malloc0 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:49.501 [2024-10-08 18:29:07.709706] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:49.501 [ 00:21:49.501 { 00:21:49.501 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:49.501 "subtype": "Discovery", 00:21:49.501 "listen_addresses": [], 00:21:49.501 "allow_any_host": true, 00:21:49.501 "hosts": [] 00:21:49.501 }, 00:21:49.501 { 00:21:49.501 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:49.501 "subtype": "NVMe", 00:21:49.501 "listen_addresses": [ 00:21:49.501 { 00:21:49.501 "trtype": "TCP", 00:21:49.501 "adrfam": "IPv4", 00:21:49.501 "traddr": "10.0.0.2", 00:21:49.501 "trsvcid": "4420" 00:21:49.501 } 00:21:49.501 ], 00:21:49.501 "allow_any_host": true, 00:21:49.501 "hosts": [], 00:21:49.501 "serial_number": "SPDK00000000000001", 00:21:49.501 "model_number": "SPDK bdev Controller", 00:21:49.501 "max_namespaces": 2, 00:21:49.501 "min_cntlid": 1, 00:21:49.501 "max_cntlid": 65519, 00:21:49.501 "namespaces": [ 00:21:49.501 { 00:21:49.501 "nsid": 1, 00:21:49.501 "bdev_name": "Malloc0", 00:21:49.501 "name": "Malloc0", 00:21:49.501 "nguid": "1618938DA510418ABC27AFE523FACCCC", 00:21:49.501 "uuid": "1618938d-a510-418a-bc27-afe523facccc" 00:21:49.501 } 00:21:49.501 ] 00:21:49.501 } 00:21:49.501 ] 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1304216 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:21:49.501 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:49.759 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:49.759 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:21:49.759 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:21:49.759 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:49.759 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:49.759 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:21:49.759 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:21:49.759 18:29:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:49.759 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:49.759 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:49.759 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:21:49.759 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:49.759 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.759 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.017 Malloc1 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.017 [ 00:21:50.017 { 00:21:50.017 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:50.017 "subtype": "Discovery", 00:21:50.017 "listen_addresses": [], 00:21:50.017 "allow_any_host": true, 00:21:50.017 "hosts": [] 00:21:50.017 }, 00:21:50.017 { 00:21:50.017 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.017 "subtype": "NVMe", 00:21:50.017 "listen_addresses": [ 00:21:50.017 { 00:21:50.017 "trtype": "TCP", 00:21:50.017 "adrfam": "IPv4", 00:21:50.017 "traddr": "10.0.0.2", 00:21:50.017 "trsvcid": "4420" 00:21:50.017 } 00:21:50.017 ], 00:21:50.017 "allow_any_host": true, 00:21:50.017 "hosts": [], 00:21:50.017 "serial_number": "SPDK00000000000001", 00:21:50.017 "model_number": "SPDK bdev Controller", 00:21:50.017 "max_namespaces": 2, 00:21:50.017 "min_cntlid": 1, 00:21:50.017 "max_cntlid": 65519, 00:21:50.017 "namespaces": [ 00:21:50.017 
{ 00:21:50.017 "nsid": 1, 00:21:50.017 "bdev_name": "Malloc0", 00:21:50.017 "name": "Malloc0", 00:21:50.017 "nguid": "1618938DA510418ABC27AFE523FACCCC", 00:21:50.017 "uuid": "1618938d-a510-418a-bc27-afe523facccc" 00:21:50.017 }, 00:21:50.017 { 00:21:50.017 "nsid": 2, 00:21:50.017 "bdev_name": "Malloc1", 00:21:50.017 "name": "Malloc1", 00:21:50.017 "nguid": "56DD4EC8CBE7409B82DEC2ED70D3DC44", 00:21:50.017 "uuid": "56dd4ec8-cbe7-409b-82de-c2ed70d3dc44" 00:21:50.017 } 00:21:50.017 ] 00:21:50.017 } 00:21:50.017 ] 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1304216 00:21:50.017 Asynchronous Event Request test 00:21:50.017 Attaching to 10.0.0.2 00:21:50.017 Attached to 10.0.0.2 00:21:50.017 Registering asynchronous event callbacks... 00:21:50.017 Starting namespace attribute notice tests for all controllers... 00:21:50.017 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:50.017 aer_cb - Changed Namespace 00:21:50.017 Cleaning up... 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:50.017 rmmod nvme_tcp 00:21:50.017 rmmod nvme_fabrics 00:21:50.017 rmmod nvme_keyring 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 1304181 ']' 
00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 1304181 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 1304181 ']' 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 1304181 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1304181 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1304181' 00:21:50.017 killing process with pid 1304181 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 1304181 00:21:50.017 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 1304181 00:21:50.275 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:50.275 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:50.275 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:50.275 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:21:50.275 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:21:50.275 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:50.275 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:21:50.275 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:50.275 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:50.275 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.275 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:50.275 18:29:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:52.812 00:21:52.812 real 0m5.802s 00:21:52.812 user 0m4.910s 00:21:52.812 sys 0m2.119s 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.812 ************************************ 00:21:52.812 END TEST nvmf_aer 00:21:52.812 ************************************ 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.812 ************************************ 00:21:52.812 START TEST nvmf_async_init 00:21:52.812 
************************************ 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:52.812 * Looking for test storage... 00:21:52.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:52.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.812 --rc genhtml_branch_coverage=1 00:21:52.812 --rc genhtml_function_coverage=1 00:21:52.812 --rc genhtml_legend=1 00:21:52.812 --rc geninfo_all_blocks=1 00:21:52.812 --rc geninfo_unexecuted_blocks=1 00:21:52.812 00:21:52.812 ' 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:52.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.812 --rc genhtml_branch_coverage=1 00:21:52.812 --rc genhtml_function_coverage=1 00:21:52.812 --rc genhtml_legend=1 00:21:52.812 --rc geninfo_all_blocks=1 00:21:52.812 --rc geninfo_unexecuted_blocks=1 00:21:52.812 00:21:52.812 ' 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:52.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.812 --rc genhtml_branch_coverage=1 00:21:52.812 --rc genhtml_function_coverage=1 00:21:52.812 --rc genhtml_legend=1 00:21:52.812 --rc geninfo_all_blocks=1 00:21:52.812 --rc geninfo_unexecuted_blocks=1 00:21:52.812 00:21:52.812 ' 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:52.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:52.812 --rc genhtml_branch_coverage=1 00:21:52.812 --rc genhtml_function_coverage=1 00:21:52.812 --rc genhtml_legend=1 00:21:52.812 --rc geninfo_all_blocks=1 00:21:52.812 --rc geninfo_unexecuted_blocks=1 00:21:52.812 00:21:52.812 ' 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:52.812 18:29:10 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:52.812 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:52.813 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:52.813 18:29:10 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=09a0e36c4d194a6bb0799af9a939459d 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:21:52.813 18:29:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:54.716 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:54.716 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:54.716 Found net devices under 0000:09:00.0: cvl_0_0 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:54.716 Found net devices under 0000:09:00.1: cvl_0_1 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:54.716 18:29:12 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:54.716 18:29:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:54.975 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:54.975 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:54.975 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:54.975 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:54.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:54.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:21:54.975 00:21:54.975 --- 10.0.0.2 ping statistics --- 00:21:54.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.975 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:21:54.975 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:54.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:54.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:21:54.975 00:21:54.975 --- 10.0.0.1 ping statistics --- 00:21:54.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.975 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:21:54.975 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:54.975 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:21:54.975 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:54.975 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:54.975 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:54.975 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:54.975 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:54.975 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:54.975 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:54.975 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:54.975 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:54.975 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:54.975 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:54.975 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=1306277 00:21:54.975 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 1306277 00:21:54.975 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 1306277 ']' 00:21:54.975 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.975 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:54.975 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:54.975 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:54.975 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:54.975 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:54.975 [2024-10-08 18:29:13.126108] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:21:54.975 [2024-10-08 18:29:13.126199] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:54.975 [2024-10-08 18:29:13.192913] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.233 [2024-10-08 18:29:13.304285] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:55.233 [2024-10-08 18:29:13.304351] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:55.233 [2024-10-08 18:29:13.304364] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:55.233 [2024-10-08 18:29:13.304385] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:55.233 [2024-10-08 18:29:13.304395] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:55.233 [2024-10-08 18:29:13.304962] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.233 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:55.233 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:21:55.233 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:55.233 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:55.233 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.233 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:55.233 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:55.233 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.233 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.233 [2024-10-08 18:29:13.455278] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:55.233 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.233 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:55.233 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.233 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.233 null0 00:21:55.233 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.233 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:55.233 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.233 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.233 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.233 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:55.233 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:55.233 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.233 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.233 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 09a0e36c4d194a6bb0799af9a939459d 00:21:55.233 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.233 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.233 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.233 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:55.234 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.234 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.234 [2024-10-08 18:29:13.495605] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:55.234 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.234 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:55.234 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.234 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.491 nvme0n1 00:21:55.491 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.491 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:55.491 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.491 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.491 [ 00:21:55.491 { 00:21:55.491 "name": "nvme0n1", 00:21:55.491 "aliases": [ 00:21:55.491 "09a0e36c-4d19-4a6b-b079-9af9a939459d" 00:21:55.491 ], 00:21:55.491 "product_name": "NVMe disk", 00:21:55.491 "block_size": 512, 00:21:55.491 "num_blocks": 2097152, 00:21:55.491 "uuid": "09a0e36c-4d19-4a6b-b079-9af9a939459d", 00:21:55.491 "numa_id": 0, 00:21:55.491 "assigned_rate_limits": { 00:21:55.491 "rw_ios_per_sec": 0, 00:21:55.491 "rw_mbytes_per_sec": 0, 00:21:55.491 "r_mbytes_per_sec": 0, 00:21:55.491 "w_mbytes_per_sec": 0 00:21:55.491 }, 00:21:55.491 "claimed": false, 00:21:55.491 "zoned": false, 00:21:55.491 "supported_io_types": { 00:21:55.491 "read": true, 00:21:55.491 "write": true, 00:21:55.491 "unmap": false, 00:21:55.491 "flush": true, 00:21:55.491 "reset": true, 00:21:55.491 "nvme_admin": true, 00:21:55.491 "nvme_io": true, 00:21:55.491 "nvme_io_md": false, 00:21:55.491 "write_zeroes": true, 00:21:55.492 "zcopy": false, 00:21:55.492 "get_zone_info": false, 00:21:55.492 "zone_management": false, 00:21:55.492 "zone_append": false, 00:21:55.492 "compare": true, 00:21:55.492 "compare_and_write": true, 00:21:55.492 "abort": true, 00:21:55.492 "seek_hole": false, 00:21:55.492 "seek_data": false, 00:21:55.492 "copy": true, 00:21:55.492 "nvme_iov_md": false 00:21:55.492 }, 00:21:55.492 
"memory_domains": [ 00:21:55.492 { 00:21:55.492 "dma_device_id": "system", 00:21:55.492 "dma_device_type": 1 00:21:55.492 } 00:21:55.492 ], 00:21:55.492 "driver_specific": { 00:21:55.492 "nvme": [ 00:21:55.492 { 00:21:55.492 "trid": { 00:21:55.492 "trtype": "TCP", 00:21:55.492 "adrfam": "IPv4", 00:21:55.492 "traddr": "10.0.0.2", 00:21:55.492 "trsvcid": "4420", 00:21:55.492 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:55.492 }, 00:21:55.492 "ctrlr_data": { 00:21:55.492 "cntlid": 1, 00:21:55.492 "vendor_id": "0x8086", 00:21:55.492 "model_number": "SPDK bdev Controller", 00:21:55.492 "serial_number": "00000000000000000000", 00:21:55.492 "firmware_revision": "25.01", 00:21:55.492 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:55.492 "oacs": { 00:21:55.492 "security": 0, 00:21:55.492 "format": 0, 00:21:55.492 "firmware": 0, 00:21:55.492 "ns_manage": 0 00:21:55.492 }, 00:21:55.492 "multi_ctrlr": true, 00:21:55.492 "ana_reporting": false 00:21:55.492 }, 00:21:55.492 "vs": { 00:21:55.492 "nvme_version": "1.3" 00:21:55.492 }, 00:21:55.492 "ns_data": { 00:21:55.492 "id": 1, 00:21:55.492 "can_share": true 00:21:55.492 } 00:21:55.492 } 00:21:55.492 ], 00:21:55.492 "mp_policy": "active_passive" 00:21:55.492 } 00:21:55.492 } 00:21:55.492 ] 00:21:55.492 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.492 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:55.492 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.492 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.492 [2024-10-08 18:29:13.744502] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:55.492 [2024-10-08 18:29:13.744592] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d09be0 (9): Bad file descriptor 00:21:55.749 [2024-10-08 18:29:13.876410] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:55.749 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.749 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:55.749 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.749 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.749 [ 00:21:55.749 { 00:21:55.749 "name": "nvme0n1", 00:21:55.749 "aliases": [ 00:21:55.749 "09a0e36c-4d19-4a6b-b079-9af9a939459d" 00:21:55.749 ], 00:21:55.749 "product_name": "NVMe disk", 00:21:55.749 "block_size": 512, 00:21:55.749 "num_blocks": 2097152, 00:21:55.749 "uuid": "09a0e36c-4d19-4a6b-b079-9af9a939459d", 00:21:55.749 "numa_id": 0, 00:21:55.749 "assigned_rate_limits": { 00:21:55.750 "rw_ios_per_sec": 0, 00:21:55.750 "rw_mbytes_per_sec": 0, 00:21:55.750 "r_mbytes_per_sec": 0, 00:21:55.750 "w_mbytes_per_sec": 0 00:21:55.750 }, 00:21:55.750 "claimed": false, 00:21:55.750 "zoned": false, 00:21:55.750 "supported_io_types": { 00:21:55.750 "read": true, 00:21:55.750 "write": true, 00:21:55.750 "unmap": false, 00:21:55.750 "flush": true, 00:21:55.750 "reset": true, 00:21:55.750 "nvme_admin": true, 00:21:55.750 "nvme_io": true, 00:21:55.750 "nvme_io_md": false, 00:21:55.750 "write_zeroes": true, 00:21:55.750 "zcopy": false, 00:21:55.750 "get_zone_info": false, 00:21:55.750 "zone_management": false, 00:21:55.750 "zone_append": false, 00:21:55.750 "compare": true, 00:21:55.750 "compare_and_write": true, 00:21:55.750 "abort": true, 00:21:55.750 "seek_hole": false, 00:21:55.750 "seek_data": false, 00:21:55.750 "copy": true, 00:21:55.750 "nvme_iov_md": false 00:21:55.750 }, 00:21:55.750 "memory_domains": [ 00:21:55.750 { 00:21:55.750 "dma_device_id": "system", 00:21:55.750 "dma_device_type": 1 00:21:55.750 } 00:21:55.750 ], 00:21:55.750 "driver_specific": { 00:21:55.750 "nvme": [ 00:21:55.750 { 00:21:55.750 "trid": { 00:21:55.750 "trtype": "TCP", 00:21:55.750 "adrfam": "IPv4", 00:21:55.750 "traddr": "10.0.0.2", 00:21:55.750 "trsvcid": "4420", 00:21:55.750 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:55.750 }, 00:21:55.750 "ctrlr_data": { 00:21:55.750 "cntlid": 2, 00:21:55.750 "vendor_id": "0x8086", 00:21:55.750 "model_number": "SPDK bdev Controller", 00:21:55.750 "serial_number": "00000000000000000000", 00:21:55.750 "firmware_revision": "25.01", 00:21:55.750 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:55.750 "oacs": { 00:21:55.750 "security": 0, 00:21:55.750 "format": 0, 00:21:55.750 "firmware": 0, 00:21:55.750 "ns_manage": 0 00:21:55.750 }, 00:21:55.750 "multi_ctrlr": true, 00:21:55.750 "ana_reporting": false 00:21:55.750 }, 00:21:55.750 "vs": { 00:21:55.750 "nvme_version": "1.3" 00:21:55.750 }, 00:21:55.750 "ns_data": { 00:21:55.750 "id": 1, 00:21:55.750 "can_share": true 00:21:55.750 } 00:21:55.750 } 00:21:55.750 ], 00:21:55.750 "mp_policy": "active_passive" 00:21:55.750 } 00:21:55.750 } 00:21:55.750 ] 00:21:55.750 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.750 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:55.750 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.750 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.750 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
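Up to this point the async_init test has only issued control-plane RPCs, all of which appear verbatim in the trace above. As a condensed sketch (assuming ./scripts/rpc.py talks to the nvmf_tgt started inside cvl_0_0_ns_spdk; addresses, nguid and names are the ones from the log), the sequence is:
# Target side: transport, backing bdev, subsystem, namespace, listener
./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py bdev_null_create null0 1024 512                     # 1024 MB null bdev, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 09a0e36c4d194a6bb0799af9a939459d
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Initiator side (same app, via bdev_nvme): attach, inspect, reset, detach
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py bdev_get_bdevs -b nvme0n1         # first JSON dump above, cntlid 1
./scripts/rpc.py bdev_nvme_reset_controller nvme0  # disconnect/reconnect; second dump shows cntlid 2
./scripts/rpc.py bdev_nvme_detach_controller nvme0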
00:21:55.750 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:55.750 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.ZDWWLr0b8K 00:21:55.750 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:55.750 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.ZDWWLr0b8K 00:21:55.750 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.ZDWWLr0b8K 00:21:55.750 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.750 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.750 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.750 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:55.750 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.750 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.750 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.750 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:55.750 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.750 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.750 [2024-10-08 18:29:13.929159] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:55.750 [2024-10-08 18:29:13.929266] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:55.750 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.750 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:21:55.750 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.750 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.750 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.750 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:55.750 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.750 18:29:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.750 [2024-10-08 18:29:13.945203] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:55.750 nvme0n1 00:21:55.750 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.750 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:21:55.750 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.750 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:55.750 [ 00:21:55.750 { 00:21:55.750 "name": "nvme0n1", 00:21:55.750 "aliases": [ 00:21:55.750 "09a0e36c-4d19-4a6b-b079-9af9a939459d" 00:21:55.750 ], 00:21:55.750 "product_name": "NVMe disk", 00:21:55.750 "block_size": 512, 00:21:55.750 "num_blocks": 2097152, 00:21:55.750 "uuid": "09a0e36c-4d19-4a6b-b079-9af9a939459d", 00:21:55.750 "numa_id": 0, 00:21:55.750 "assigned_rate_limits": { 00:21:55.750 "rw_ios_per_sec": 0, 00:21:55.750 "rw_mbytes_per_sec": 0, 00:21:55.750 "r_mbytes_per_sec": 0, 00:21:55.750 "w_mbytes_per_sec": 0 00:21:55.750 }, 00:21:55.750 "claimed": false, 00:21:55.750 "zoned": false, 00:21:55.750 "supported_io_types": { 00:21:55.750 "read": true, 00:21:55.750 "write": true, 00:21:55.750 "unmap": false, 00:21:55.750 "flush": true, 00:21:55.750 "reset": true, 00:21:55.750 "nvme_admin": true, 00:21:55.750 "nvme_io": true, 00:21:55.750 "nvme_io_md": false, 00:21:55.750 "write_zeroes": true, 00:21:55.750 "zcopy": false, 00:21:55.750 "get_zone_info": false, 00:21:55.750 "zone_management": false, 00:21:55.750 "zone_append": false, 00:21:55.750 "compare": true, 00:21:55.750 "compare_and_write": true, 00:21:55.750 "abort": true, 00:21:55.750 "seek_hole": false, 00:21:55.750 "seek_data": false, 00:21:55.750 "copy": true, 00:21:55.750 "nvme_iov_md": false 00:21:55.750 }, 00:21:55.750 "memory_domains": [ 00:21:55.750 { 00:21:55.750 "dma_device_id": "system", 00:21:55.750 "dma_device_type": 1 00:21:55.750 } 00:21:55.750 ], 00:21:55.750 "driver_specific": { 00:21:55.750 "nvme": [ 00:21:55.750 { 00:21:55.750 "trid": { 00:21:55.750 "trtype": "TCP", 00:21:55.750 "adrfam": "IPv4", 00:21:55.750 "traddr": "10.0.0.2", 00:21:55.750 "trsvcid": "4421", 00:21:55.750 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:55.750 }, 00:21:55.750 "ctrlr_data": { 00:21:55.750 "cntlid": 3, 00:21:55.750 "vendor_id": "0x8086", 00:21:55.750 "model_number": "SPDK bdev Controller", 00:21:55.750 "serial_number": "00000000000000000000", 00:21:55.750 "firmware_revision": "25.01", 00:21:55.750 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:55.750 "oacs": { 00:21:55.750 "security": 0, 00:21:55.750 "format": 0, 00:21:55.750 "firmware": 0, 00:21:55.750 "ns_manage": 0 00:21:55.750 }, 00:21:55.750 "multi_ctrlr": true, 00:21:55.750 "ana_reporting": false 00:21:55.750 }, 00:21:55.750 "vs": { 00:21:55.750 "nvme_version": "1.3" 00:21:55.750 }, 00:21:55.750 "ns_data": { 00:21:55.750 "id": 1, 00:21:55.750 "can_share": true 00:21:55.750 } 00:21:55.750 } 00:21:55.750 ], 00:21:55.750 "mp_policy": "active_passive" 00:21:55.750 } 00:21:55.750 } 00:21:55.750 ] 00:21:55.750 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.750 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:55.750 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.750 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.008 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.008 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.ZDWWLr0b8K 00:21:56.008 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
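The last leg of the test repeats the attach over a TLS secure channel using a file-based PSK; every call is in the trace above. In sketch form (the key file path is whatever mktemp returned, shown here as $key_path, and the interchange PSK string is the example key from the log):
key_path=$(mktemp)
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
chmod 0600 "$key_path"
./scripts/rpc.py keyring_file_add_key key0 "$key_path"
./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
./scripts/rpc.py bdev_get_bdevs -b nvme0n1         # third JSON dump above: cntlid 3, trsvcid 4421
./scripts/rpc.py bdev_nvme_detach_controller nvme0
rm -f "$key_path"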
00:21:56.008 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:21:56.008 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:56.008 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:21:56.008 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:56.008 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:21:56.008 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:56.008 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:56.008 rmmod nvme_tcp 00:21:56.008 rmmod nvme_fabrics 00:21:56.008 rmmod nvme_keyring 00:21:56.008 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:56.008 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:21:56.008 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:21:56.008 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 1306277 ']' 00:21:56.008 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 1306277 00:21:56.008 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 1306277 ']' 00:21:56.008 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 1306277 00:21:56.008 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:21:56.008 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:56.008 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1306277 00:21:56.008 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:56.008 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:56.008 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1306277' 00:21:56.008 killing process with pid 1306277 00:21:56.008 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 1306277 00:21:56.008 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 1306277 00:21:56.267 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:56.267 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:56.267 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:56.267 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:21:56.267 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:21:56.267 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:56.267 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:21:56.267 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:56.267 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:56.267 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:21:56.267 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:56.267 18:29:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.170 18:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:58.170 00:21:58.170 real 0m5.779s 00:21:58.170 user 0m2.247s 00:21:58.170 sys 0m1.963s 00:21:58.170 18:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:58.170 18:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:58.170 ************************************ 00:21:58.170 END TEST nvmf_async_init 00:21:58.170 ************************************ 00:21:58.170 18:29:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:58.170 18:29:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:58.170 18:29:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:58.170 18:29:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.428 ************************************ 00:21:58.428 START TEST dma 00:21:58.428 ************************************ 00:21:58.428 18:29:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:58.428 * Looking for test storage... 00:21:58.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:58.428 18:29:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:58.428 18:29:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:21:58.428 18:29:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:58.428 18:29:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:58.428 18:29:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:58.428 18:29:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:58.428 18:29:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:58.428 18:29:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:21:58.428 18:29:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:21:58.428 18:29:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:21:58.428 18:29:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:21:58.428 18:29:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:21:58.428 18:29:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:21:58.428 18:29:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:21:58.428 18:29:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:58.428 18:29:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:21:58.428 18:29:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:21:58.428 18:29:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:58.428 18:29:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:58.428 18:29:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:21:58.428 18:29:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:21:58.428 18:29:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:58.428 18:29:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:21:58.428 18:29:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:58.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.429 --rc genhtml_branch_coverage=1 00:21:58.429 --rc genhtml_function_coverage=1 00:21:58.429 --rc genhtml_legend=1 00:21:58.429 --rc geninfo_all_blocks=1 00:21:58.429 --rc geninfo_unexecuted_blocks=1 00:21:58.429 00:21:58.429 ' 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:58.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.429 --rc genhtml_branch_coverage=1 00:21:58.429 --rc genhtml_function_coverage=1 00:21:58.429 --rc genhtml_legend=1 00:21:58.429 --rc geninfo_all_blocks=1 00:21:58.429 --rc geninfo_unexecuted_blocks=1 00:21:58.429 00:21:58.429 ' 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:58.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.429 --rc genhtml_branch_coverage=1 00:21:58.429 --rc genhtml_function_coverage=1 00:21:58.429 --rc genhtml_legend=1 00:21:58.429 --rc geninfo_all_blocks=1 00:21:58.429 --rc geninfo_unexecuted_blocks=1 00:21:58.429 00:21:58.429 ' 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:58.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.429 --rc genhtml_branch_coverage=1 00:21:58.429 --rc genhtml_function_coverage=1 00:21:58.429 --rc genhtml_legend=1 00:21:58.429 --rc geninfo_all_blocks=1 00:21:58.429 --rc geninfo_unexecuted_blocks=1 00:21:58.429 00:21:58.429 ' 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:58.429 
18:29:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:58.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:21:58.429 00:21:58.429 real 0m0.154s 00:21:58.429 user 0m0.091s 00:21:58.429 sys 0m0.072s 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:58.429 ************************************ 00:21:58.429 END TEST dma 00:21:58.429 ************************************ 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.429 ************************************ 00:21:58.429 START TEST nvmf_identify 00:21:58.429 
************************************ 00:21:58.429 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:58.688 * Looking for test storage... 00:21:58.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:58.688 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:58.688 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:21:58.688 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:58.688 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:58.688 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:58.688 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:58.688 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:58.688 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:21:58.688 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:21:58.688 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:21:58.688 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:21:58.688 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:21:58.688 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:58.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.689 --rc genhtml_branch_coverage=1 00:21:58.689 --rc genhtml_function_coverage=1 00:21:58.689 --rc genhtml_legend=1 00:21:58.689 --rc geninfo_all_blocks=1 00:21:58.689 --rc geninfo_unexecuted_blocks=1 00:21:58.689 00:21:58.689 ' 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:58.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.689 --rc genhtml_branch_coverage=1 00:21:58.689 --rc genhtml_function_coverage=1 00:21:58.689 --rc genhtml_legend=1 00:21:58.689 --rc geninfo_all_blocks=1 00:21:58.689 --rc geninfo_unexecuted_blocks=1 00:21:58.689 00:21:58.689 ' 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:58.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.689 --rc genhtml_branch_coverage=1 00:21:58.689 --rc genhtml_function_coverage=1 00:21:58.689 --rc genhtml_legend=1 00:21:58.689 --rc geninfo_all_blocks=1 00:21:58.689 --rc geninfo_unexecuted_blocks=1 00:21:58.689 00:21:58.689 ' 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:58.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:58.689 --rc genhtml_branch_coverage=1 00:21:58.689 --rc genhtml_function_coverage=1 00:21:58.689 --rc genhtml_legend=1 00:21:58.689 --rc geninfo_all_blocks=1 00:21:58.689 --rc geninfo_unexecuted_blocks=1 00:21:58.689 00:21:58.689 ' 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:58.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:21:58.689 18:29:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:01.224 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:01.224 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
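The device-discovery loop above resolves each supported E810 port to its kernel network interface through the sysfs glob /sys/bus/pci/devices/$pci/net/*. A hand-run equivalent, assuming the PCI addresses reported in this log, is simply:

    # list the net device bound to each E810 function found above
    for pci in 0000:09:00.0 0000:09:00.1; do
        ls /sys/bus/pci/devices/$pci/net/    # prints cvl_0_0 and cvl_0_1 on this host
    done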
00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:01.224 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:01.225 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.225 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:01.225 Found net devices under 0000:09:00.0: cvl_0_0 00:22:01.225 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.225 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:01.225 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.225 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:01.225 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.225 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:01.225 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:01.225 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.225 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:01.225 Found net devices under 0000:09:00.1: cvl_0_1 00:22:01.225 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.225 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:01.225 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:22:01.225 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:01.225 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:01.225 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:01.225 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:01.225 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:01.225 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:01.225 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:01.225 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:01.225 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:01.225 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:01.225 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:01.225 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:01.225 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:01.225 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:22:01.225 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:01.225 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:01.225 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:01.225 18:29:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:01.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:01.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:22:01.225 00:22:01.225 --- 10.0.0.2 ping statistics --- 00:22:01.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.225 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:01.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:01.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:22:01.225 00:22:01.225 --- 10.0.0.1 ping statistics --- 00:22:01.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.225 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1308426 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1308426 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 1308426 ']' 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.225 [2024-10-08 18:29:19.196334] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
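With connectivity verified, the identify test launches the target inside the dedicated namespace and configures it over RPC, as the calls traced below show. The harness drives this through its rpc_cmd wrapper; the direct scripts/rpc.py invocations here are a sketch that assumes the default /var/tmp/spdk.sock RPC socket of the '-i 0' instance.

    # start the target in the namespace created above (4 cores, all trace groups enabled)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # once the RPC socket is listening, create the TCP transport and a test subsystem
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420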
00:22:01.225 [2024-10-08 18:29:19.196410] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.225 [2024-10-08 18:29:19.260185] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:01.225 [2024-10-08 18:29:19.369233] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:01.225 [2024-10-08 18:29:19.369285] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:01.225 [2024-10-08 18:29:19.369320] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:01.225 [2024-10-08 18:29:19.369333] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:01.225 [2024-10-08 18:29:19.369343] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:01.225 [2024-10-08 18:29:19.370924] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.225 [2024-10-08 18:29:19.370951] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:01.225 [2024-10-08 18:29:19.371004] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:22:01.225 [2024-10-08 18:29:19.371007] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.225 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.225 [2024-10-08 18:29:19.508750] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.484 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.484 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:01.484 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:01.484 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.484 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:01.484 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.484 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.484 Malloc0 00:22:01.484 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.484 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:01.484 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.484 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.484 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.484 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:01.484 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.484 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.484 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.484 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:01.484 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.484 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.484 [2024-10-08 18:29:19.590986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.484 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.484 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:01.484 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.484 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.484 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.484 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:01.484 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.484 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.484 [ 00:22:01.484 { 00:22:01.484 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:01.484 "subtype": "Discovery", 00:22:01.484 "listen_addresses": [ 00:22:01.484 { 00:22:01.484 "trtype": "TCP", 00:22:01.484 "adrfam": "IPv4", 00:22:01.484 "traddr": "10.0.0.2", 00:22:01.484 "trsvcid": "4420" 00:22:01.484 } 00:22:01.484 ], 00:22:01.484 "allow_any_host": true, 00:22:01.484 "hosts": [] 00:22:01.484 }, 00:22:01.484 { 00:22:01.484 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.484 "subtype": "NVMe", 00:22:01.484 "listen_addresses": [ 00:22:01.484 { 00:22:01.484 "trtype": "TCP", 00:22:01.484 "adrfam": "IPv4", 00:22:01.484 "traddr": "10.0.0.2", 00:22:01.484 "trsvcid": "4420" 00:22:01.484 } 00:22:01.484 ], 00:22:01.484 "allow_any_host": true, 00:22:01.484 "hosts": [], 00:22:01.484 "serial_number": "SPDK00000000000001", 00:22:01.484 "model_number": "SPDK bdev Controller", 00:22:01.484 "max_namespaces": 32, 00:22:01.484 "min_cntlid": 1, 00:22:01.484 "max_cntlid": 65519, 00:22:01.484 "namespaces": [ 00:22:01.484 { 00:22:01.484 "nsid": 1, 00:22:01.484 "bdev_name": "Malloc0", 00:22:01.484 "name": "Malloc0", 00:22:01.484 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:01.484 "eui64": "ABCDEF0123456789", 00:22:01.484 "uuid": "565bffd7-25ff-4a7c-aeda-c45447224032" 00:22:01.484 } 00:22:01.484 ] 00:22:01.484 } 00:22:01.484 ] 00:22:01.484 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.484 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:01.484 [2024-10-08 18:29:19.636293] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:22:01.484 [2024-10-08 18:29:19.636340] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1308448 ] 00:22:01.484 [2024-10-08 18:29:19.670910] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:01.484 [2024-10-08 18:29:19.670977] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:01.484 [2024-10-08 18:29:19.670988] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:01.484 [2024-10-08 18:29:19.671007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:01.484 [2024-10-08 18:29:19.671021] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:01.484 [2024-10-08 18:29:19.674769] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:01.484 [2024-10-08 18:29:19.674839] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xccf760 0 00:22:01.484 [2024-10-08 18:29:19.682298] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:01.484 [2024-10-08 18:29:19.682326] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:01.484 [2024-10-08 18:29:19.682337] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:01.484 [2024-10-08 18:29:19.682343] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:01.484 [2024-10-08 18:29:19.682381] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.484 [2024-10-08 18:29:19.682393] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.484 [2024-10-08 18:29:19.682401] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xccf760) 00:22:01.484 [2024-10-08 18:29:19.682418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:01.484 [2024-10-08 18:29:19.682445] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2f480, cid 0, qid 0 00:22:01.484 [2024-10-08 18:29:19.689301] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.484 [2024-10-08 18:29:19.689320] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.484 [2024-10-08 18:29:19.689328] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.484 [2024-10-08 18:29:19.689336] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd2f480) on tqpair=0xccf760 00:22:01.484 [2024-10-08 18:29:19.689357] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:01.485 [2024-10-08 18:29:19.689369] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:01.485 [2024-10-08 18:29:19.689379] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:01.485 [2024-10-08 18:29:19.689400] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.485 [2024-10-08 18:29:19.689409] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.485 [2024-10-08 18:29:19.689415] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xccf760) 00:22:01.485 [2024-10-08 18:29:19.689427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.485 [2024-10-08 18:29:19.689451] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2f480, cid 0, qid 0 00:22:01.485 [2024-10-08 18:29:19.689551] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.485 [2024-10-08 18:29:19.689566] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.485 [2024-10-08 18:29:19.689573] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.485 [2024-10-08 18:29:19.689581] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd2f480) on tqpair=0xccf760 00:22:01.485 [2024-10-08 18:29:19.689591] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:01.485 [2024-10-08 18:29:19.689604] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:01.485 [2024-10-08 18:29:19.689617] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.485 [2024-10-08 18:29:19.689624] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.485 [2024-10-08 18:29:19.689631] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xccf760) 00:22:01.485 [2024-10-08 18:29:19.689647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.485 [2024-10-08 18:29:19.689670] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2f480, cid 0, qid 0 00:22:01.485 [2024-10-08 18:29:19.689791] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.485 [2024-10-08 18:29:19.689805] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.485 [2024-10-08 18:29:19.689812] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.485 [2024-10-08 18:29:19.689819] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd2f480) on tqpair=0xccf760 00:22:01.485 [2024-10-08 18:29:19.689828] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:01.485 [2024-10-08 18:29:19.689843] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:01.485 [2024-10-08 18:29:19.689855] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.485 [2024-10-08 18:29:19.689863] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.485 [2024-10-08 18:29:19.689869] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xccf760) 00:22:01.485 [2024-10-08 18:29:19.689880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.485 [2024-10-08 18:29:19.689901] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2f480, cid 0, qid 0 00:22:01.485 
[2024-10-08 18:29:19.690013] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.485 [2024-10-08 18:29:19.690027] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.485 [2024-10-08 18:29:19.690034] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.485 [2024-10-08 18:29:19.690041] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd2f480) on tqpair=0xccf760 00:22:01.485 [2024-10-08 18:29:19.690050] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:01.485 [2024-10-08 18:29:19.690066] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.485 [2024-10-08 18:29:19.690075] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.485 [2024-10-08 18:29:19.690082] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xccf760) 00:22:01.485 [2024-10-08 18:29:19.690093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.485 [2024-10-08 18:29:19.690115] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2f480, cid 0, qid 0 00:22:01.485 [2024-10-08 18:29:19.690227] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.485 [2024-10-08 18:29:19.690241] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.485 [2024-10-08 18:29:19.690248] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.485 [2024-10-08 18:29:19.690255] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd2f480) on tqpair=0xccf760 00:22:01.485 [2024-10-08 18:29:19.690264] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:01.485 [2024-10-08 18:29:19.690273] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:01.485 [2024-10-08 18:29:19.690295] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:01.485 [2024-10-08 18:29:19.690408] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:01.485 [2024-10-08 18:29:19.690417] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:01.485 [2024-10-08 18:29:19.690436] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.485 [2024-10-08 18:29:19.690445] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.485 [2024-10-08 18:29:19.690452] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xccf760) 00:22:01.485 [2024-10-08 18:29:19.690463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.485 [2024-10-08 18:29:19.690485] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2f480, cid 0, qid 0 00:22:01.485 [2024-10-08 18:29:19.690576] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.485 [2024-10-08 18:29:19.690590] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:22:01.485 [2024-10-08 18:29:19.690597] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.485 [2024-10-08 18:29:19.690604] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd2f480) on tqpair=0xccf760 00:22:01.485 [2024-10-08 18:29:19.690613] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:01.485 [2024-10-08 18:29:19.690630] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.485 [2024-10-08 18:29:19.690641] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.485 [2024-10-08 18:29:19.690648] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xccf760) 00:22:01.485 [2024-10-08 18:29:19.690658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.485 [2024-10-08 18:29:19.690681] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2f480, cid 0, qid 0 00:22:01.485 [2024-10-08 18:29:19.690792] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.485 [2024-10-08 18:29:19.690807] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.485 [2024-10-08 18:29:19.690815] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.485 [2024-10-08 18:29:19.690822] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd2f480) on tqpair=0xccf760 00:22:01.485 [2024-10-08 18:29:19.690830] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:01.485 [2024-10-08 18:29:19.690840] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:01.485 [2024-10-08 18:29:19.690853] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:01.485 [2024-10-08 18:29:19.690868] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:01.485 [2024-10-08 18:29:19.690886] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.485 [2024-10-08 18:29:19.690896] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xccf760) 00:22:01.485 [2024-10-08 18:29:19.690908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.485 [2024-10-08 18:29:19.690930] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2f480, cid 0, qid 0 00:22:01.485 [2024-10-08 18:29:19.691066] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.485 [2024-10-08 18:29:19.691081] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.485 [2024-10-08 18:29:19.691088] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.485 [2024-10-08 18:29:19.691095] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xccf760): datao=0, datal=4096, cccid=0 00:22:01.485 [2024-10-08 18:29:19.691103] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd2f480) on tqpair(0xccf760): expected_datao=0, payload_size=4096 
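The controller-initialization trace above (FABRIC CONNECT on the admin queue, PROPERTY GET/SET against CC and CSTS, then IDENTIFY) is the discovery pass of the spdk_nvme_identify run started earlier in this test. Reproduced by hand against the same listener it is:

    ./build/bin/spdk_nvme_identify -L all \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
    # a kernel-initiator cross-check of the same discovery service (assuming nvme-cli is installed on the host):
    nvme discover -t tcp -a 10.0.0.2 -s 4420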
00:22:01.485 [2024-10-08 18:29:19.691111] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.485 [2024-10-08 18:29:19.691126] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.485 [2024-10-08 18:29:19.691135] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.485 [2024-10-08 18:29:19.691147] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.485 [2024-10-08 18:29:19.691157] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.485 [2024-10-08 18:29:19.691164] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.485 [2024-10-08 18:29:19.691171] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd2f480) on tqpair=0xccf760 00:22:01.485 [2024-10-08 18:29:19.691184] nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:01.485 [2024-10-08 18:29:19.691193] nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:01.485 [2024-10-08 18:29:19.691200] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:01.485 [2024-10-08 18:29:19.691209] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:01.485 [2024-10-08 18:29:19.691217] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:01.485 [2024-10-08 18:29:19.691225] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:01.485 [2024-10-08 18:29:19.691244] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:01.485 [2024-10-08 18:29:19.691258] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.485 [2024-10-08 18:29:19.691265] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.485 [2024-10-08 18:29:19.691272] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xccf760) 00:22:01.485 [2024-10-08 18:29:19.691283] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:01.485 [2024-10-08 18:29:19.691315] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2f480, cid 0, qid 0 00:22:01.485 [2024-10-08 18:29:19.691424] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.485 [2024-10-08 18:29:19.691437] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.485 [2024-10-08 18:29:19.691444] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.485 [2024-10-08 18:29:19.691450] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd2f480) on tqpair=0xccf760 00:22:01.485 [2024-10-08 18:29:19.691462] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.691470] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.691476] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xccf760) 00:22:01.486 [2024-10-08 18:29:19.691486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.486 [2024-10-08 18:29:19.691497] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.691503] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.691510] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xccf760) 00:22:01.486 [2024-10-08 18:29:19.691518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.486 [2024-10-08 18:29:19.691528] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.691535] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.691541] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xccf760) 00:22:01.486 [2024-10-08 18:29:19.691550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.486 [2024-10-08 18:29:19.691566] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.691573] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.691580] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xccf760) 00:22:01.486 [2024-10-08 18:29:19.691589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.486 [2024-10-08 18:29:19.691598] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:01.486 [2024-10-08 18:29:19.691617] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:01.486 [2024-10-08 18:29:19.691630] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.691637] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xccf760) 00:22:01.486 [2024-10-08 18:29:19.691648] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.486 [2024-10-08 18:29:19.691671] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2f480, cid 0, qid 0 00:22:01.486 [2024-10-08 18:29:19.691682] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2f600, cid 1, qid 0 00:22:01.486 [2024-10-08 18:29:19.691690] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2f780, cid 2, qid 0 00:22:01.486 [2024-10-08 18:29:19.691697] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2f900, cid 3, qid 0 00:22:01.486 [2024-10-08 18:29:19.691705] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2fa80, cid 4, qid 0 00:22:01.486 [2024-10-08 18:29:19.691831] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.486 [2024-10-08 18:29:19.691843] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.486 [2024-10-08 18:29:19.691850] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.691857] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd2fa80) on 
tqpair=0xccf760 00:22:01.486 [2024-10-08 18:29:19.691866] nvme_ctrlr.c:3077:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:01.486 [2024-10-08 18:29:19.691874] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:01.486 [2024-10-08 18:29:19.691892] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.691902] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xccf760) 00:22:01.486 [2024-10-08 18:29:19.691912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.486 [2024-10-08 18:29:19.691933] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2fa80, cid 4, qid 0 00:22:01.486 [2024-10-08 18:29:19.692036] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.486 [2024-10-08 18:29:19.692050] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.486 [2024-10-08 18:29:19.692057] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.692065] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xccf760): datao=0, datal=4096, cccid=4 00:22:01.486 [2024-10-08 18:29:19.692072] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd2fa80) on tqpair(0xccf760): expected_datao=0, payload_size=4096 00:22:01.486 [2024-10-08 18:29:19.692080] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.692096] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.692106] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.692118] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.486 [2024-10-08 18:29:19.692128] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.486 [2024-10-08 18:29:19.692140] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.692147] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd2fa80) on tqpair=0xccf760 00:22:01.486 [2024-10-08 18:29:19.692166] nvme_ctrlr.c:4220:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:01.486 [2024-10-08 18:29:19.692206] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.692218] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xccf760) 00:22:01.486 [2024-10-08 18:29:19.692229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.486 [2024-10-08 18:29:19.692241] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.692248] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.692254] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xccf760) 00:22:01.486 [2024-10-08 18:29:19.692263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.486 [2024-10-08 18:29:19.692294] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2fa80, cid 4, qid 0 00:22:01.486 [2024-10-08 18:29:19.692307] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2fc00, cid 5, qid 0 00:22:01.486 [2024-10-08 18:29:19.692438] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.486 [2024-10-08 18:29:19.692452] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.486 [2024-10-08 18:29:19.692459] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.692466] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xccf760): datao=0, datal=1024, cccid=4 00:22:01.486 [2024-10-08 18:29:19.692473] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd2fa80) on tqpair(0xccf760): expected_datao=0, payload_size=1024 00:22:01.486 [2024-10-08 18:29:19.692481] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.692490] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.692498] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.692507] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.486 [2024-10-08 18:29:19.692516] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.486 [2024-10-08 18:29:19.692523] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.692529] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd2fc00) on tqpair=0xccf760 00:22:01.486 [2024-10-08 18:29:19.737299] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.486 [2024-10-08 18:29:19.737319] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.486 [2024-10-08 18:29:19.737327] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.737334] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd2fa80) on tqpair=0xccf760 00:22:01.486 [2024-10-08 18:29:19.737358] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.737369] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xccf760) 00:22:01.486 [2024-10-08 18:29:19.737381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.486 [2024-10-08 18:29:19.737412] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2fa80, cid 4, qid 0 00:22:01.486 [2024-10-08 18:29:19.737523] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.486 [2024-10-08 18:29:19.737537] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.486 [2024-10-08 18:29:19.737545] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.737551] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xccf760): datao=0, datal=3072, cccid=4 00:22:01.486 [2024-10-08 18:29:19.737563] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd2fa80) on tqpair(0xccf760): expected_datao=0, payload_size=3072 00:22:01.486 [2024-10-08 18:29:19.737572] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.737582] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.486 
[2024-10-08 18:29:19.737590] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.737602] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.486 [2024-10-08 18:29:19.737612] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.486 [2024-10-08 18:29:19.737619] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.737626] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd2fa80) on tqpair=0xccf760 00:22:01.486 [2024-10-08 18:29:19.737641] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.737650] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xccf760) 00:22:01.486 [2024-10-08 18:29:19.737661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.486 [2024-10-08 18:29:19.737690] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2fa80, cid 4, qid 0 00:22:01.486 [2024-10-08 18:29:19.737827] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.486 [2024-10-08 18:29:19.737840] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.486 [2024-10-08 18:29:19.737847] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.737853] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xccf760): datao=0, datal=8, cccid=4 00:22:01.486 [2024-10-08 18:29:19.737860] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd2fa80) on tqpair(0xccf760): expected_datao=0, payload_size=8 00:22:01.486 [2024-10-08 18:29:19.737868] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.737877] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.486 [2024-10-08 18:29:19.737885] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.748 [2024-10-08 18:29:19.778375] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.748 [2024-10-08 18:29:19.778395] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.748 [2024-10-08 18:29:19.778403] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.748 [2024-10-08 18:29:19.778410] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd2fa80) on tqpair=0xccf760 00:22:01.748 ===================================================== 00:22:01.748 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:01.748 ===================================================== 00:22:01.748 Controller Capabilities/Features 00:22:01.748 ================================ 00:22:01.748 Vendor ID: 0000 00:22:01.748 Subsystem Vendor ID: 0000 00:22:01.748 Serial Number: .................... 00:22:01.748 Model Number: ........................................ 
00:22:01.748 Firmware Version: 25.01 00:22:01.748 Recommended Arb Burst: 0 00:22:01.748 IEEE OUI Identifier: 00 00 00 00:22:01.748 Multi-path I/O 00:22:01.748 May have multiple subsystem ports: No 00:22:01.748 May have multiple controllers: No 00:22:01.748 Associated with SR-IOV VF: No 00:22:01.748 Max Data Transfer Size: 131072 00:22:01.748 Max Number of Namespaces: 0 00:22:01.748 Max Number of I/O Queues: 1024 00:22:01.748 NVMe Specification Version (VS): 1.3 00:22:01.748 NVMe Specification Version (Identify): 1.3 00:22:01.748 Maximum Queue Entries: 128 00:22:01.748 Contiguous Queues Required: Yes 00:22:01.748 Arbitration Mechanisms Supported 00:22:01.748 Weighted Round Robin: Not Supported 00:22:01.748 Vendor Specific: Not Supported 00:22:01.748 Reset Timeout: 15000 ms 00:22:01.748 Doorbell Stride: 4 bytes 00:22:01.748 NVM Subsystem Reset: Not Supported 00:22:01.748 Command Sets Supported 00:22:01.748 NVM Command Set: Supported 00:22:01.748 Boot Partition: Not Supported 00:22:01.748 Memory Page Size Minimum: 4096 bytes 00:22:01.748 Memory Page Size Maximum: 4096 bytes 00:22:01.748 Persistent Memory Region: Not Supported 00:22:01.748 Optional Asynchronous Events Supported 00:22:01.748 Namespace Attribute Notices: Not Supported 00:22:01.748 Firmware Activation Notices: Not Supported 00:22:01.748 ANA Change Notices: Not Supported 00:22:01.748 PLE Aggregate Log Change Notices: Not Supported 00:22:01.748 LBA Status Info Alert Notices: Not Supported 00:22:01.748 EGE Aggregate Log Change Notices: Not Supported 00:22:01.748 Normal NVM Subsystem Shutdown event: Not Supported 00:22:01.748 Zone Descriptor Change Notices: Not Supported 00:22:01.748 Discovery Log Change Notices: Supported 00:22:01.748 Controller Attributes 00:22:01.748 128-bit Host Identifier: Not Supported 00:22:01.748 Non-Operational Permissive Mode: Not Supported 00:22:01.748 NVM Sets: Not Supported 00:22:01.748 Read Recovery Levels: Not Supported 00:22:01.748 Endurance Groups: Not Supported 00:22:01.748 Predictable Latency Mode: Not Supported 00:22:01.748 Traffic Based Keep ALive: Not Supported 00:22:01.748 Namespace Granularity: Not Supported 00:22:01.748 SQ Associations: Not Supported 00:22:01.748 UUID List: Not Supported 00:22:01.748 Multi-Domain Subsystem: Not Supported 00:22:01.748 Fixed Capacity Management: Not Supported 00:22:01.748 Variable Capacity Management: Not Supported 00:22:01.748 Delete Endurance Group: Not Supported 00:22:01.748 Delete NVM Set: Not Supported 00:22:01.748 Extended LBA Formats Supported: Not Supported 00:22:01.748 Flexible Data Placement Supported: Not Supported 00:22:01.748 00:22:01.748 Controller Memory Buffer Support 00:22:01.748 ================================ 00:22:01.748 Supported: No 00:22:01.748 00:22:01.748 Persistent Memory Region Support 00:22:01.748 ================================ 00:22:01.748 Supported: No 00:22:01.748 00:22:01.748 Admin Command Set Attributes 00:22:01.748 ============================ 00:22:01.748 Security Send/Receive: Not Supported 00:22:01.748 Format NVM: Not Supported 00:22:01.748 Firmware Activate/Download: Not Supported 00:22:01.748 Namespace Management: Not Supported 00:22:01.748 Device Self-Test: Not Supported 00:22:01.748 Directives: Not Supported 00:22:01.748 NVMe-MI: Not Supported 00:22:01.748 Virtualization Management: Not Supported 00:22:01.748 Doorbell Buffer Config: Not Supported 00:22:01.748 Get LBA Status Capability: Not Supported 00:22:01.748 Command & Feature Lockdown Capability: Not Supported 00:22:01.748 Abort Command Limit: 1 00:22:01.748 Async 
Event Request Limit: 4 00:22:01.748 Number of Firmware Slots: N/A 00:22:01.748 Firmware Slot 1 Read-Only: N/A 00:22:01.748 Firmware Activation Without Reset: N/A 00:22:01.748 Multiple Update Detection Support: N/A 00:22:01.748 Firmware Update Granularity: No Information Provided 00:22:01.748 Per-Namespace SMART Log: No 00:22:01.748 Asymmetric Namespace Access Log Page: Not Supported 00:22:01.748 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:01.748 Command Effects Log Page: Not Supported 00:22:01.748 Get Log Page Extended Data: Supported 00:22:01.748 Telemetry Log Pages: Not Supported 00:22:01.748 Persistent Event Log Pages: Not Supported 00:22:01.748 Supported Log Pages Log Page: May Support 00:22:01.748 Commands Supported & Effects Log Page: Not Supported 00:22:01.748 Feature Identifiers & Effects Log Page:May Support 00:22:01.748 NVMe-MI Commands & Effects Log Page: May Support 00:22:01.748 Data Area 4 for Telemetry Log: Not Supported 00:22:01.748 Error Log Page Entries Supported: 128 00:22:01.748 Keep Alive: Not Supported 00:22:01.748 00:22:01.748 NVM Command Set Attributes 00:22:01.748 ========================== 00:22:01.748 Submission Queue Entry Size 00:22:01.748 Max: 1 00:22:01.748 Min: 1 00:22:01.748 Completion Queue Entry Size 00:22:01.748 Max: 1 00:22:01.748 Min: 1 00:22:01.748 Number of Namespaces: 0 00:22:01.748 Compare Command: Not Supported 00:22:01.748 Write Uncorrectable Command: Not Supported 00:22:01.748 Dataset Management Command: Not Supported 00:22:01.748 Write Zeroes Command: Not Supported 00:22:01.748 Set Features Save Field: Not Supported 00:22:01.748 Reservations: Not Supported 00:22:01.748 Timestamp: Not Supported 00:22:01.748 Copy: Not Supported 00:22:01.748 Volatile Write Cache: Not Present 00:22:01.748 Atomic Write Unit (Normal): 1 00:22:01.748 Atomic Write Unit (PFail): 1 00:22:01.748 Atomic Compare & Write Unit: 1 00:22:01.748 Fused Compare & Write: Supported 00:22:01.748 Scatter-Gather List 00:22:01.748 SGL Command Set: Supported 00:22:01.748 SGL Keyed: Supported 00:22:01.748 SGL Bit Bucket Descriptor: Not Supported 00:22:01.748 SGL Metadata Pointer: Not Supported 00:22:01.748 Oversized SGL: Not Supported 00:22:01.748 SGL Metadata Address: Not Supported 00:22:01.748 SGL Offset: Supported 00:22:01.748 Transport SGL Data Block: Not Supported 00:22:01.748 Replay Protected Memory Block: Not Supported 00:22:01.748 00:22:01.748 Firmware Slot Information 00:22:01.748 ========================= 00:22:01.748 Active slot: 0 00:22:01.748 00:22:01.748 00:22:01.748 Error Log 00:22:01.748 ========= 00:22:01.748 00:22:01.748 Active Namespaces 00:22:01.748 ================= 00:22:01.748 Discovery Log Page 00:22:01.748 ================== 00:22:01.748 Generation Counter: 2 00:22:01.748 Number of Records: 2 00:22:01.748 Record Format: 0 00:22:01.748 00:22:01.748 Discovery Log Entry 0 00:22:01.748 ---------------------- 00:22:01.748 Transport Type: 3 (TCP) 00:22:01.748 Address Family: 1 (IPv4) 00:22:01.748 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:01.748 Entry Flags: 00:22:01.748 Duplicate Returned Information: 1 00:22:01.748 Explicit Persistent Connection Support for Discovery: 1 00:22:01.748 Transport Requirements: 00:22:01.748 Secure Channel: Not Required 00:22:01.748 Port ID: 0 (0x0000) 00:22:01.748 Controller ID: 65535 (0xffff) 00:22:01.748 Admin Max SQ Size: 128 00:22:01.748 Transport Service Identifier: 4420 00:22:01.748 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:01.748 Transport Address: 10.0.0.2 00:22:01.748 
Discovery Log Entry 1 00:22:01.748 ---------------------- 00:22:01.748 Transport Type: 3 (TCP) 00:22:01.748 Address Family: 1 (IPv4) 00:22:01.748 Subsystem Type: 2 (NVM Subsystem) 00:22:01.748 Entry Flags: 00:22:01.748 Duplicate Returned Information: 0 00:22:01.748 Explicit Persistent Connection Support for Discovery: 0 00:22:01.748 Transport Requirements: 00:22:01.748 Secure Channel: Not Required 00:22:01.748 Port ID: 0 (0x0000) 00:22:01.748 Controller ID: 65535 (0xffff) 00:22:01.748 Admin Max SQ Size: 128 00:22:01.748 Transport Service Identifier: 4420 00:22:01.748 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:01.748 Transport Address: 10.0.0.2 [2024-10-08 18:29:19.778521] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:01.748 [2024-10-08 18:29:19.778543] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd2f480) on tqpair=0xccf760 00:22:01.748 [2024-10-08 18:29:19.778555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.749 [2024-10-08 18:29:19.778564] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd2f600) on tqpair=0xccf760 00:22:01.749 [2024-10-08 18:29:19.778572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.749 [2024-10-08 18:29:19.778580] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd2f780) on tqpair=0xccf760 00:22:01.749 [2024-10-08 18:29:19.778588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.749 [2024-10-08 18:29:19.778596] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd2f900) on tqpair=0xccf760 00:22:01.749 [2024-10-08 18:29:19.778603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.749 [2024-10-08 18:29:19.778617] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.778625] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.778642] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xccf760) 00:22:01.749 [2024-10-08 18:29:19.778655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.749 [2024-10-08 18:29:19.778680] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2f900, cid 3, qid 0 00:22:01.749 [2024-10-08 18:29:19.778754] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.749 [2024-10-08 18:29:19.778768] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.749 [2024-10-08 18:29:19.778776] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.778782] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd2f900) on tqpair=0xccf760 00:22:01.749 [2024-10-08 18:29:19.778794] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.778802] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.778808] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xccf760) 00:22:01.749 [2024-10-08 18:29:19.778819] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.749 [2024-10-08 18:29:19.778846] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2f900, cid 3, qid 0 00:22:01.749 [2024-10-08 18:29:19.778987] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.749 [2024-10-08 18:29:19.779000] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.749 [2024-10-08 18:29:19.779007] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.779014] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd2f900) on tqpair=0xccf760 00:22:01.749 [2024-10-08 18:29:19.779022] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:01.749 [2024-10-08 18:29:19.779030] nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:01.749 [2024-10-08 18:29:19.779052] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.779062] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.779069] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xccf760) 00:22:01.749 [2024-10-08 18:29:19.779079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.749 [2024-10-08 18:29:19.779101] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2f900, cid 3, qid 0 00:22:01.749 [2024-10-08 18:29:19.779232] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.749 [2024-10-08 18:29:19.779244] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.749 [2024-10-08 18:29:19.779251] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.779258] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd2f900) on tqpair=0xccf760 00:22:01.749 [2024-10-08 18:29:19.779274] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.779284] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.779303] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xccf760) 00:22:01.749 [2024-10-08 18:29:19.779314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.749 [2024-10-08 18:29:19.779336] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2f900, cid 3, qid 0 00:22:01.749 [2024-10-08 18:29:19.779445] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.749 [2024-10-08 18:29:19.779458] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.749 [2024-10-08 18:29:19.779465] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.779472] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd2f900) on tqpair=0xccf760 00:22:01.749 [2024-10-08 18:29:19.779493] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.779503] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.779509] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xccf760) 00:22:01.749 [2024-10-08 18:29:19.779520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.749 [2024-10-08 18:29:19.779541] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2f900, cid 3, qid 0 00:22:01.749 [2024-10-08 18:29:19.779619] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.749 [2024-10-08 18:29:19.779633] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.749 [2024-10-08 18:29:19.779640] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.779647] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd2f900) on tqpair=0xccf760 00:22:01.749 [2024-10-08 18:29:19.779663] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.779672] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.779679] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xccf760) 00:22:01.749 [2024-10-08 18:29:19.779689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.749 [2024-10-08 18:29:19.779710] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2f900, cid 3, qid 0 00:22:01.749 [2024-10-08 18:29:19.779794] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.749 [2024-10-08 18:29:19.779808] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.749 [2024-10-08 18:29:19.779815] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.779822] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd2f900) on tqpair=0xccf760 00:22:01.749 [2024-10-08 18:29:19.779838] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.779847] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.779854] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xccf760) 00:22:01.749 [2024-10-08 18:29:19.779864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.749 [2024-10-08 18:29:19.779885] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2f900, cid 3, qid 0 00:22:01.749 [2024-10-08 18:29:19.779997] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.749 [2024-10-08 18:29:19.780011] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.749 [2024-10-08 18:29:19.780018] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.780025] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd2f900) on tqpair=0xccf760 00:22:01.749 [2024-10-08 18:29:19.780041] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.780050] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.780057] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xccf760) 00:22:01.749 [2024-10-08 18:29:19.780068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.749 [2024-10-08 18:29:19.780088] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2f900, cid 3, qid 0 00:22:01.749 [2024-10-08 18:29:19.780199] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.749 [2024-10-08 18:29:19.780211] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.749 [2024-10-08 18:29:19.780218] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.780225] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd2f900) on tqpair=0xccf760 00:22:01.749 [2024-10-08 18:29:19.780241] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.780254] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.780261] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xccf760) 00:22:01.749 [2024-10-08 18:29:19.780272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.749 [2024-10-08 18:29:19.784310] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2f900, cid 3, qid 0 00:22:01.749 [2024-10-08 18:29:19.784333] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.749 [2024-10-08 18:29:19.784345] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.749 [2024-10-08 18:29:19.784352] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.784359] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd2f900) on tqpair=0xccf760 00:22:01.749 [2024-10-08 18:29:19.784373] nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:22:01.749 00:22:01.749 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:01.749 [2024-10-08 18:29:19.822561] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:22:01.749 [2024-10-08 18:29:19.822607] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1308571 ] 00:22:01.749 [2024-10-08 18:29:19.856493] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:01.749 [2024-10-08 18:29:19.856550] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:01.749 [2024-10-08 18:29:19.856561] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:01.749 [2024-10-08 18:29:19.856580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:01.749 [2024-10-08 18:29:19.856609] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:01.749 [2024-10-08 18:29:19.860731] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:01.749 [2024-10-08 18:29:19.860787] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2236760 0 00:22:01.749 [2024-10-08 18:29:19.867296] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:01.749 [2024-10-08 18:29:19.867322] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:01.749 [2024-10-08 18:29:19.867332] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:01.749 [2024-10-08 18:29:19.867339] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:01.749 [2024-10-08 18:29:19.867381] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.867394] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.867401] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2236760) 00:22:01.749 [2024-10-08 18:29:19.867421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:01.749 [2024-10-08 18:29:19.867449] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296480, cid 0, qid 0 00:22:01.749 [2024-10-08 18:29:19.874301] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.749 [2024-10-08 18:29:19.874320] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.749 [2024-10-08 18:29:19.874328] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.874341] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296480) on tqpair=0x2236760 00:22:01.749 [2024-10-08 18:29:19.874356] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:01.749 [2024-10-08 18:29:19.874382] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:01.749 [2024-10-08 18:29:19.874391] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:01.749 [2024-10-08 18:29:19.874409] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.874418] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.874425] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2236760) 00:22:01.749 [2024-10-08 18:29:19.874436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.749 [2024-10-08 18:29:19.874461] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296480, cid 0, qid 0 00:22:01.749 [2024-10-08 18:29:19.874587] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.749 [2024-10-08 18:29:19.874602] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.749 [2024-10-08 18:29:19.874610] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.874617] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296480) on tqpair=0x2236760 00:22:01.749 [2024-10-08 18:29:19.874625] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:01.749 [2024-10-08 18:29:19.874638] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:01.749 [2024-10-08 18:29:19.874651] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.874658] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.874665] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2236760) 00:22:01.749 [2024-10-08 18:29:19.874675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.749 [2024-10-08 18:29:19.874697] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296480, cid 0, qid 0 00:22:01.749 [2024-10-08 18:29:19.874796] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.749 [2024-10-08 18:29:19.874808] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.749 [2024-10-08 18:29:19.874816] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.874823] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296480) on tqpair=0x2236760 00:22:01.749 [2024-10-08 18:29:19.874831] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:01.749 [2024-10-08 18:29:19.874845] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:01.749 [2024-10-08 18:29:19.874857] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.874865] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.874871] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2236760) 00:22:01.749 [2024-10-08 18:29:19.874882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.749 [2024-10-08 18:29:19.874903] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296480, cid 0, qid 0 00:22:01.749 [2024-10-08 18:29:19.874982] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.749 [2024-10-08 18:29:19.874996] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.749 [2024-10-08 18:29:19.875003] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.875010] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296480) on tqpair=0x2236760 00:22:01.749 [2024-10-08 18:29:19.875023] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:01.749 [2024-10-08 18:29:19.875041] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.875050] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.875057] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2236760) 00:22:01.749 [2024-10-08 18:29:19.875067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.749 [2024-10-08 18:29:19.875088] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296480, cid 0, qid 0 00:22:01.749 [2024-10-08 18:29:19.875185] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.749 [2024-10-08 18:29:19.875199] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.749 [2024-10-08 18:29:19.875206] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.875213] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296480) on tqpair=0x2236760 00:22:01.749 [2024-10-08 18:29:19.875221] nvme_ctrlr.c:3924:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:01.749 [2024-10-08 18:29:19.875230] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:01.749 [2024-10-08 18:29:19.875243] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:01.749 [2024-10-08 18:29:19.875353] nvme_ctrlr.c:4122:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:01.749 [2024-10-08 18:29:19.875376] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:01.749 [2024-10-08 18:29:19.875398] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.875406] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.875412] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2236760) 00:22:01.749 [2024-10-08 18:29:19.875423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.749 [2024-10-08 18:29:19.875446] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296480, cid 0, qid 0 00:22:01.749 [2024-10-08 18:29:19.875561] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.749 [2024-10-08 18:29:19.875575] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.749 [2024-10-08 18:29:19.875583] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.875590] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296480) on tqpair=0x2236760 00:22:01.749 [2024-10-08 18:29:19.875598] 
nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:01.749 [2024-10-08 18:29:19.875615] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.875624] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.875630] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2236760) 00:22:01.749 [2024-10-08 18:29:19.875641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.749 [2024-10-08 18:29:19.875662] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296480, cid 0, qid 0 00:22:01.749 [2024-10-08 18:29:19.875786] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.749 [2024-10-08 18:29:19.875799] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.749 [2024-10-08 18:29:19.875806] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.875818] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296480) on tqpair=0x2236760 00:22:01.749 [2024-10-08 18:29:19.875826] nvme_ctrlr.c:3959:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:01.749 [2024-10-08 18:29:19.875834] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:01.749 [2024-10-08 18:29:19.875847] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:01.749 [2024-10-08 18:29:19.875866] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:01.749 [2024-10-08 18:29:19.875881] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.749 [2024-10-08 18:29:19.875889] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2236760) 00:22:01.749 [2024-10-08 18:29:19.875900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.749 [2024-10-08 18:29:19.875922] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296480, cid 0, qid 0 00:22:01.749 [2024-10-08 18:29:19.876048] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.749 [2024-10-08 18:29:19.876063] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.749 [2024-10-08 18:29:19.876071] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.876078] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2236760): datao=0, datal=4096, cccid=0 00:22:01.750 [2024-10-08 18:29:19.876085] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2296480) on tqpair(0x2236760): expected_datao=0, payload_size=4096 00:22:01.750 [2024-10-08 18:29:19.876093] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.876103] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.876111] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.750 [2024-10-08 
18:29:19.876123] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.750 [2024-10-08 18:29:19.876133] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.750 [2024-10-08 18:29:19.876141] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.876147] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296480) on tqpair=0x2236760 00:22:01.750 [2024-10-08 18:29:19.876159] nvme_ctrlr.c:2097:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:01.750 [2024-10-08 18:29:19.876167] nvme_ctrlr.c:2101:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:01.750 [2024-10-08 18:29:19.876175] nvme_ctrlr.c:2104:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:01.750 [2024-10-08 18:29:19.876182] nvme_ctrlr.c:2128:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:01.750 [2024-10-08 18:29:19.876189] nvme_ctrlr.c:2143:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:01.750 [2024-10-08 18:29:19.876197] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:01.750 [2024-10-08 18:29:19.876216] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:01.750 [2024-10-08 18:29:19.876230] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.876237] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.876244] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2236760) 00:22:01.750 [2024-10-08 18:29:19.876255] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:01.750 [2024-10-08 18:29:19.876280] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296480, cid 0, qid 0 00:22:01.750 [2024-10-08 18:29:19.876420] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.750 [2024-10-08 18:29:19.876435] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.750 [2024-10-08 18:29:19.876442] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.876449] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296480) on tqpair=0x2236760 00:22:01.750 [2024-10-08 18:29:19.876460] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.876468] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.876474] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2236760) 00:22:01.750 [2024-10-08 18:29:19.876485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.750 [2024-10-08 18:29:19.876495] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.876502] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.876508] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2236760) 00:22:01.750 
[2024-10-08 18:29:19.876517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.750 [2024-10-08 18:29:19.876526] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.876533] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.876539] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2236760) 00:22:01.750 [2024-10-08 18:29:19.876548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.750 [2024-10-08 18:29:19.876557] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.876564] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.876570] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2236760) 00:22:01.750 [2024-10-08 18:29:19.876579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.750 [2024-10-08 18:29:19.876587] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:01.750 [2024-10-08 18:29:19.876606] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:01.750 [2024-10-08 18:29:19.876619] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.876627] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2236760) 00:22:01.750 [2024-10-08 18:29:19.876637] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.750 [2024-10-08 18:29:19.876660] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296480, cid 0, qid 0 00:22:01.750 [2024-10-08 18:29:19.876672] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296600, cid 1, qid 0 00:22:01.750 [2024-10-08 18:29:19.876679] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296780, cid 2, qid 0 00:22:01.750 [2024-10-08 18:29:19.876687] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296900, cid 3, qid 0 00:22:01.750 [2024-10-08 18:29:19.876694] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296a80, cid 4, qid 0 00:22:01.750 [2024-10-08 18:29:19.876870] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.750 [2024-10-08 18:29:19.876884] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.750 [2024-10-08 18:29:19.876892] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.876903] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296a80) on tqpair=0x2236760 00:22:01.750 [2024-10-08 18:29:19.876912] nvme_ctrlr.c:3077:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:01.750 [2024-10-08 18:29:19.876921] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:01.750 [2024-10-08 18:29:19.876934] 
nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:01.750 [2024-10-08 18:29:19.876949] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:01.750 [2024-10-08 18:29:19.876961] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.876968] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.876974] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2236760) 00:22:01.750 [2024-10-08 18:29:19.876985] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:01.750 [2024-10-08 18:29:19.877007] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296a80, cid 4, qid 0 00:22:01.750 [2024-10-08 18:29:19.877088] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.750 [2024-10-08 18:29:19.877103] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.750 [2024-10-08 18:29:19.877110] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.877117] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296a80) on tqpair=0x2236760 00:22:01.750 [2024-10-08 18:29:19.877182] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:01.750 [2024-10-08 18:29:19.877201] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:01.750 [2024-10-08 18:29:19.877216] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.877224] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2236760) 00:22:01.750 [2024-10-08 18:29:19.877234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.750 [2024-10-08 18:29:19.877256] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296a80, cid 4, qid 0 00:22:01.750 [2024-10-08 18:29:19.877392] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.750 [2024-10-08 18:29:19.877408] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.750 [2024-10-08 18:29:19.877415] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.877422] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2236760): datao=0, datal=4096, cccid=4 00:22:01.750 [2024-10-08 18:29:19.877429] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2296a80) on tqpair(0x2236760): expected_datao=0, payload_size=4096 00:22:01.750 [2024-10-08 18:29:19.877437] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.877447] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.877455] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.877474] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.750 [2024-10-08 18:29:19.877486] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:22:01.750 [2024-10-08 18:29:19.877494] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.877501] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296a80) on tqpair=0x2236760 00:22:01.750 [2024-10-08 18:29:19.877521] nvme_ctrlr.c:4753:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:01.750 [2024-10-08 18:29:19.877538] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:01.750 [2024-10-08 18:29:19.877560] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:01.750 [2024-10-08 18:29:19.877575] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.877583] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2236760) 00:22:01.750 [2024-10-08 18:29:19.877593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.750 [2024-10-08 18:29:19.877615] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296a80, cid 4, qid 0 00:22:01.750 [2024-10-08 18:29:19.877751] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.750 [2024-10-08 18:29:19.877764] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.750 [2024-10-08 18:29:19.877771] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.877778] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2236760): datao=0, datal=4096, cccid=4 00:22:01.750 [2024-10-08 18:29:19.877785] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2296a80) on tqpair(0x2236760): expected_datao=0, payload_size=4096 00:22:01.750 [2024-10-08 18:29:19.877792] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.877802] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.877810] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.877830] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.750 [2024-10-08 18:29:19.877843] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.750 [2024-10-08 18:29:19.877852] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.877859] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296a80) on tqpair=0x2236760 00:22:01.750 [2024-10-08 18:29:19.877882] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:01.750 [2024-10-08 18:29:19.877901] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:01.750 [2024-10-08 18:29:19.877916] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.877924] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2236760) 00:22:01.750 [2024-10-08 18:29:19.877934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 
cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.750 [2024-10-08 18:29:19.877957] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296a80, cid 4, qid 0 00:22:01.750 [2024-10-08 18:29:19.878094] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.750 [2024-10-08 18:29:19.878108] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.750 [2024-10-08 18:29:19.878116] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.878122] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2236760): datao=0, datal=4096, cccid=4 00:22:01.750 [2024-10-08 18:29:19.878129] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2296a80) on tqpair(0x2236760): expected_datao=0, payload_size=4096 00:22:01.750 [2024-10-08 18:29:19.878137] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.878147] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.878155] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.878174] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.750 [2024-10-08 18:29:19.878186] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.750 [2024-10-08 18:29:19.878193] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.878204] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296a80) on tqpair=0x2236760 00:22:01.750 [2024-10-08 18:29:19.878219] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:01.750 [2024-10-08 18:29:19.878234] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:01.750 [2024-10-08 18:29:19.878256] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:01.750 [2024-10-08 18:29:19.878268] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:01.750 [2024-10-08 18:29:19.878276] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:01.750 [2024-10-08 18:29:19.878284] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:01.750 [2024-10-08 18:29:19.882306] nvme_ctrlr.c:3165:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:01.750 [2024-10-08 18:29:19.882316] nvme_ctrlr.c:1577:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:01.750 [2024-10-08 18:29:19.882325] nvme_ctrlr.c:1583:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:01.750 [2024-10-08 18:29:19.882344] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.882353] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2236760) 00:22:01.750 [2024-10-08 18:29:19.882364] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.750 [2024-10-08 18:29:19.882375] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.882382] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.882388] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2236760) 00:22:01.750 [2024-10-08 18:29:19.882397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.750 [2024-10-08 18:29:19.882420] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296a80, cid 4, qid 0 00:22:01.750 [2024-10-08 18:29:19.882432] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296c00, cid 5, qid 0 00:22:01.750 [2024-10-08 18:29:19.882560] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.750 [2024-10-08 18:29:19.882575] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.750 [2024-10-08 18:29:19.882582] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.882589] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296a80) on tqpair=0x2236760 00:22:01.750 [2024-10-08 18:29:19.882599] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.750 [2024-10-08 18:29:19.882609] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.750 [2024-10-08 18:29:19.882616] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.882623] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296c00) on tqpair=0x2236760 00:22:01.750 [2024-10-08 18:29:19.882638] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.882648] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2236760) 00:22:01.750 [2024-10-08 18:29:19.882659] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.750 [2024-10-08 18:29:19.882680] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296c00, cid 5, qid 0 00:22:01.750 [2024-10-08 18:29:19.882759] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.750 [2024-10-08 18:29:19.882778] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.750 [2024-10-08 18:29:19.882787] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.882794] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296c00) on tqpair=0x2236760 00:22:01.750 [2024-10-08 18:29:19.882810] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.882819] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2236760) 00:22:01.750 [2024-10-08 18:29:19.882830] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.750 [2024-10-08 18:29:19.882851] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296c00, cid 5, qid 0 00:22:01.750 [2024-10-08 18:29:19.882960] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.750 
[2024-10-08 18:29:19.882974] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.750 [2024-10-08 18:29:19.882982] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.882988] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296c00) on tqpair=0x2236760 00:22:01.750 [2024-10-08 18:29:19.883004] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.750 [2024-10-08 18:29:19.883014] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2236760) 00:22:01.751 [2024-10-08 18:29:19.883024] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.751 [2024-10-08 18:29:19.883045] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296c00, cid 5, qid 0 00:22:01.751 [2024-10-08 18:29:19.883120] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.751 [2024-10-08 18:29:19.883134] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.751 [2024-10-08 18:29:19.883142] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.751 [2024-10-08 18:29:19.883148] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296c00) on tqpair=0x2236760 00:22:01.751 [2024-10-08 18:29:19.883174] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.751 [2024-10-08 18:29:19.883186] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2236760) 00:22:01.751 [2024-10-08 18:29:19.883198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.751 [2024-10-08 18:29:19.883210] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.751 [2024-10-08 18:29:19.883219] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2236760) 00:22:01.751 [2024-10-08 18:29:19.883228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.751 [2024-10-08 18:29:19.883240] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.751 [2024-10-08 18:29:19.883248] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2236760) 00:22:01.751 [2024-10-08 18:29:19.883258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.751 [2024-10-08 18:29:19.883270] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.751 [2024-10-08 18:29:19.883278] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2236760) 00:22:01.751 [2024-10-08 18:29:19.883324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.751 [2024-10-08 18:29:19.883352] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296c00, cid 5, qid 0 00:22:01.751 [2024-10-08 18:29:19.883364] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296a80, cid 4, qid 0 00:22:01.751 [2024-10-08 18:29:19.883376] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296d80, cid 6, qid 0 00:22:01.751 [2024-10-08 18:29:19.883384] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296f00, cid 7, qid 0 00:22:01.751 [2024-10-08 18:29:19.883555] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.751 [2024-10-08 18:29:19.883571] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.751 [2024-10-08 18:29:19.883579] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.751 [2024-10-08 18:29:19.883585] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2236760): datao=0, datal=8192, cccid=5 00:22:01.751 [2024-10-08 18:29:19.883593] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2296c00) on tqpair(0x2236760): expected_datao=0, payload_size=8192 00:22:01.751 [2024-10-08 18:29:19.883600] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.751 [2024-10-08 18:29:19.883619] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.751 [2024-10-08 18:29:19.883629] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.751 [2024-10-08 18:29:19.883641] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.751 [2024-10-08 18:29:19.883653] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.751 [2024-10-08 18:29:19.883660] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.751 [2024-10-08 18:29:19.883667] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2236760): datao=0, datal=512, cccid=4 00:22:01.751 [2024-10-08 18:29:19.883674] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2296a80) on tqpair(0x2236760): expected_datao=0, payload_size=512 00:22:01.751 [2024-10-08 18:29:19.883681] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.751 [2024-10-08 18:29:19.883691] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.751 [2024-10-08 18:29:19.883698] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.751 [2024-10-08 18:29:19.883707] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.751 [2024-10-08 18:29:19.883716] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.751 [2024-10-08 18:29:19.883723] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.751 [2024-10-08 18:29:19.883730] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2236760): datao=0, datal=512, cccid=6 00:22:01.751 [2024-10-08 18:29:19.883737] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2296d80) on tqpair(0x2236760): expected_datao=0, payload_size=512 00:22:01.751 [2024-10-08 18:29:19.883745] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.751 [2024-10-08 18:29:19.883754] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.751 [2024-10-08 18:29:19.883762] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.751 [2024-10-08 18:29:19.883770] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:01.751 [2024-10-08 18:29:19.883780] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:01.751 [2024-10-08 18:29:19.883787] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:01.751 [2024-10-08 18:29:19.883793] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x2236760): datao=0, datal=4096, cccid=7 00:22:01.751 [2024-10-08 18:29:19.883801] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2296f00) on tqpair(0x2236760): expected_datao=0, payload_size=4096 00:22:01.751 [2024-10-08 18:29:19.883808] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.751 [2024-10-08 18:29:19.883817] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:01.751 [2024-10-08 18:29:19.883825] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:01.751 [2024-10-08 18:29:19.883837] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.751 [2024-10-08 18:29:19.883847] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.751 [2024-10-08 18:29:19.883854] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.751 [2024-10-08 18:29:19.883861] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296c00) on tqpair=0x2236760 00:22:01.751 [2024-10-08 18:29:19.883885] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.751 [2024-10-08 18:29:19.883898] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.751 [2024-10-08 18:29:19.883906] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.751 [2024-10-08 18:29:19.883913] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296a80) on tqpair=0x2236760 00:22:01.751 [2024-10-08 18:29:19.883931] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.751 [2024-10-08 18:29:19.883958] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.751 [2024-10-08 18:29:19.883965] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.751 [2024-10-08 18:29:19.883972] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296d80) on tqpair=0x2236760 00:22:01.751 [2024-10-08 18:29:19.883983] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.751 [2024-10-08 18:29:19.883993] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.751 [2024-10-08 18:29:19.884015] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.751 [2024-10-08 18:29:19.884022] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296f00) on tqpair=0x2236760 00:22:01.751 ===================================================== 00:22:01.751 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:01.751 ===================================================== 00:22:01.751 Controller Capabilities/Features 00:22:01.751 ================================ 00:22:01.751 Vendor ID: 8086 00:22:01.751 Subsystem Vendor ID: 8086 00:22:01.751 Serial Number: SPDK00000000000001 00:22:01.751 Model Number: SPDK bdev Controller 00:22:01.751 Firmware Version: 25.01 00:22:01.751 Recommended Arb Burst: 6 00:22:01.751 IEEE OUI Identifier: e4 d2 5c 00:22:01.751 Multi-path I/O 00:22:01.751 May have multiple subsystem ports: Yes 00:22:01.751 May have multiple controllers: Yes 00:22:01.751 Associated with SR-IOV VF: No 00:22:01.751 Max Data Transfer Size: 131072 00:22:01.751 Max Number of Namespaces: 32 00:22:01.751 Max Number of I/O Queues: 127 00:22:01.751 NVMe Specification Version (VS): 1.3 00:22:01.751 NVMe Specification Version (Identify): 1.3 00:22:01.751 Maximum Queue Entries: 128 00:22:01.751 Contiguous Queues Required: Yes 00:22:01.751 Arbitration Mechanisms Supported 00:22:01.751 Weighted Round Robin: Not Supported 
00:22:01.751 Vendor Specific: Not Supported 00:22:01.751 Reset Timeout: 15000 ms 00:22:01.751 Doorbell Stride: 4 bytes 00:22:01.751 NVM Subsystem Reset: Not Supported 00:22:01.751 Command Sets Supported 00:22:01.751 NVM Command Set: Supported 00:22:01.751 Boot Partition: Not Supported 00:22:01.751 Memory Page Size Minimum: 4096 bytes 00:22:01.751 Memory Page Size Maximum: 4096 bytes 00:22:01.751 Persistent Memory Region: Not Supported 00:22:01.751 Optional Asynchronous Events Supported 00:22:01.751 Namespace Attribute Notices: Supported 00:22:01.751 Firmware Activation Notices: Not Supported 00:22:01.751 ANA Change Notices: Not Supported 00:22:01.751 PLE Aggregate Log Change Notices: Not Supported 00:22:01.751 LBA Status Info Alert Notices: Not Supported 00:22:01.751 EGE Aggregate Log Change Notices: Not Supported 00:22:01.751 Normal NVM Subsystem Shutdown event: Not Supported 00:22:01.751 Zone Descriptor Change Notices: Not Supported 00:22:01.751 Discovery Log Change Notices: Not Supported 00:22:01.751 Controller Attributes 00:22:01.751 128-bit Host Identifier: Supported 00:22:01.751 Non-Operational Permissive Mode: Not Supported 00:22:01.751 NVM Sets: Not Supported 00:22:01.751 Read Recovery Levels: Not Supported 00:22:01.751 Endurance Groups: Not Supported 00:22:01.751 Predictable Latency Mode: Not Supported 00:22:01.751 Traffic Based Keep ALive: Not Supported 00:22:01.751 Namespace Granularity: Not Supported 00:22:01.751 SQ Associations: Not Supported 00:22:01.751 UUID List: Not Supported 00:22:01.751 Multi-Domain Subsystem: Not Supported 00:22:01.751 Fixed Capacity Management: Not Supported 00:22:01.751 Variable Capacity Management: Not Supported 00:22:01.751 Delete Endurance Group: Not Supported 00:22:01.751 Delete NVM Set: Not Supported 00:22:01.751 Extended LBA Formats Supported: Not Supported 00:22:01.751 Flexible Data Placement Supported: Not Supported 00:22:01.751 00:22:01.751 Controller Memory Buffer Support 00:22:01.751 ================================ 00:22:01.751 Supported: No 00:22:01.751 00:22:01.751 Persistent Memory Region Support 00:22:01.751 ================================ 00:22:01.751 Supported: No 00:22:01.751 00:22:01.751 Admin Command Set Attributes 00:22:01.751 ============================ 00:22:01.751 Security Send/Receive: Not Supported 00:22:01.751 Format NVM: Not Supported 00:22:01.751 Firmware Activate/Download: Not Supported 00:22:01.751 Namespace Management: Not Supported 00:22:01.751 Device Self-Test: Not Supported 00:22:01.751 Directives: Not Supported 00:22:01.751 NVMe-MI: Not Supported 00:22:01.751 Virtualization Management: Not Supported 00:22:01.751 Doorbell Buffer Config: Not Supported 00:22:01.751 Get LBA Status Capability: Not Supported 00:22:01.751 Command & Feature Lockdown Capability: Not Supported 00:22:01.751 Abort Command Limit: 4 00:22:01.751 Async Event Request Limit: 4 00:22:01.751 Number of Firmware Slots: N/A 00:22:01.751 Firmware Slot 1 Read-Only: N/A 00:22:01.751 Firmware Activation Without Reset: N/A 00:22:01.751 Multiple Update Detection Support: N/A 00:22:01.751 Firmware Update Granularity: No Information Provided 00:22:01.751 Per-Namespace SMART Log: No 00:22:01.751 Asymmetric Namespace Access Log Page: Not Supported 00:22:01.751 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:01.751 Command Effects Log Page: Supported 00:22:01.751 Get Log Page Extended Data: Supported 00:22:01.751 Telemetry Log Pages: Not Supported 00:22:01.751 Persistent Event Log Pages: Not Supported 00:22:01.751 Supported Log Pages Log Page: May Support 
00:22:01.751 Commands Supported & Effects Log Page: Not Supported 00:22:01.751 Feature Identifiers & Effects Log Page:May Support 00:22:01.751 NVMe-MI Commands & Effects Log Page: May Support 00:22:01.751 Data Area 4 for Telemetry Log: Not Supported 00:22:01.751 Error Log Page Entries Supported: 128 00:22:01.751 Keep Alive: Supported 00:22:01.751 Keep Alive Granularity: 10000 ms 00:22:01.751 00:22:01.751 NVM Command Set Attributes 00:22:01.751 ========================== 00:22:01.751 Submission Queue Entry Size 00:22:01.751 Max: 64 00:22:01.751 Min: 64 00:22:01.751 Completion Queue Entry Size 00:22:01.751 Max: 16 00:22:01.751 Min: 16 00:22:01.751 Number of Namespaces: 32 00:22:01.751 Compare Command: Supported 00:22:01.751 Write Uncorrectable Command: Not Supported 00:22:01.751 Dataset Management Command: Supported 00:22:01.751 Write Zeroes Command: Supported 00:22:01.751 Set Features Save Field: Not Supported 00:22:01.751 Reservations: Supported 00:22:01.751 Timestamp: Not Supported 00:22:01.751 Copy: Supported 00:22:01.751 Volatile Write Cache: Present 00:22:01.751 Atomic Write Unit (Normal): 1 00:22:01.751 Atomic Write Unit (PFail): 1 00:22:01.751 Atomic Compare & Write Unit: 1 00:22:01.751 Fused Compare & Write: Supported 00:22:01.751 Scatter-Gather List 00:22:01.751 SGL Command Set: Supported 00:22:01.751 SGL Keyed: Supported 00:22:01.751 SGL Bit Bucket Descriptor: Not Supported 00:22:01.751 SGL Metadata Pointer: Not Supported 00:22:01.751 Oversized SGL: Not Supported 00:22:01.751 SGL Metadata Address: Not Supported 00:22:01.751 SGL Offset: Supported 00:22:01.751 Transport SGL Data Block: Not Supported 00:22:01.751 Replay Protected Memory Block: Not Supported 00:22:01.751 00:22:01.751 Firmware Slot Information 00:22:01.751 ========================= 00:22:01.751 Active slot: 1 00:22:01.751 Slot 1 Firmware Revision: 25.01 00:22:01.751 00:22:01.751 00:22:01.751 Commands Supported and Effects 00:22:01.751 ============================== 00:22:01.751 Admin Commands 00:22:01.751 -------------- 00:22:01.751 Get Log Page (02h): Supported 00:22:01.751 Identify (06h): Supported 00:22:01.751 Abort (08h): Supported 00:22:01.751 Set Features (09h): Supported 00:22:01.751 Get Features (0Ah): Supported 00:22:01.751 Asynchronous Event Request (0Ch): Supported 00:22:01.751 Keep Alive (18h): Supported 00:22:01.751 I/O Commands 00:22:01.751 ------------ 00:22:01.751 Flush (00h): Supported LBA-Change 00:22:01.751 Write (01h): Supported LBA-Change 00:22:01.751 Read (02h): Supported 00:22:01.751 Compare (05h): Supported 00:22:01.751 Write Zeroes (08h): Supported LBA-Change 00:22:01.751 Dataset Management (09h): Supported LBA-Change 00:22:01.751 Copy (19h): Supported LBA-Change 00:22:01.751 00:22:01.751 Error Log 00:22:01.751 ========= 00:22:01.751 00:22:01.751 Arbitration 00:22:01.751 =========== 00:22:01.751 Arbitration Burst: 1 00:22:01.751 00:22:01.751 Power Management 00:22:01.751 ================ 00:22:01.751 Number of Power States: 1 00:22:01.751 Current Power State: Power State #0 00:22:01.751 Power State #0: 00:22:01.751 Max Power: 0.00 W 00:22:01.751 Non-Operational State: Operational 00:22:01.751 Entry Latency: Not Reported 00:22:01.751 Exit Latency: Not Reported 00:22:01.751 Relative Read Throughput: 0 00:22:01.751 Relative Read Latency: 0 00:22:01.751 Relative Write Throughput: 0 00:22:01.751 Relative Write Latency: 0 00:22:01.751 Idle Power: Not Reported 00:22:01.751 Active Power: Not Reported 00:22:01.751 Non-Operational Permissive Mode: Not Supported 00:22:01.751 00:22:01.751 Health 
Information 00:22:01.751 ================== 00:22:01.751 Critical Warnings: 00:22:01.751 Available Spare Space: OK 00:22:01.751 Temperature: OK 00:22:01.751 Device Reliability: OK 00:22:01.751 Read Only: No 00:22:01.751 Volatile Memory Backup: OK 00:22:01.751 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:01.751 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:01.751 Available Spare: 0% 00:22:01.751 Available Spare Threshold: 0% 00:22:01.751 Life Percentage Used:[2024-10-08 18:29:19.884155] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.751 [2024-10-08 18:29:19.884167] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2236760) 00:22:01.751 [2024-10-08 18:29:19.884178] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.751 [2024-10-08 18:29:19.884200] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296f00, cid 7, qid 0 00:22:01.751 [2024-10-08 18:29:19.884354] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.751 [2024-10-08 18:29:19.884370] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.751 [2024-10-08 18:29:19.884378] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.751 [2024-10-08 18:29:19.884385] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296f00) on tqpair=0x2236760 00:22:01.751 [2024-10-08 18:29:19.884428] nvme_ctrlr.c:4417:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:01.751 [2024-10-08 18:29:19.884448] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296480) on tqpair=0x2236760 00:22:01.751 [2024-10-08 18:29:19.884460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.751 [2024-10-08 18:29:19.884469] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296600) on tqpair=0x2236760 00:22:01.751 [2024-10-08 18:29:19.884477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.751 [2024-10-08 18:29:19.884485] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296780) on tqpair=0x2236760 00:22:01.751 [2024-10-08 18:29:19.884493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.751 [2024-10-08 18:29:19.884501] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296900) on tqpair=0x2236760 00:22:01.751 [2024-10-08 18:29:19.884509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.751 [2024-10-08 18:29:19.884521] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.751 [2024-10-08 18:29:19.884529] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.751 [2024-10-08 18:29:19.884536] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2236760) 00:22:01.751 [2024-10-08 18:29:19.884547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.751 [2024-10-08 18:29:19.884570] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296900, cid 3, qid 0 00:22:01.751 [2024-10-08 
18:29:19.884699] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.752 [2024-10-08 18:29:19.884718] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.752 [2024-10-08 18:29:19.884727] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.752 [2024-10-08 18:29:19.884734] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296900) on tqpair=0x2236760 00:22:01.752 [2024-10-08 18:29:19.884745] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.752 [2024-10-08 18:29:19.884753] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.752 [2024-10-08 18:29:19.884760] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2236760) 00:22:01.752 [2024-10-08 18:29:19.884771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.752 [2024-10-08 18:29:19.884797] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296900, cid 3, qid 0 00:22:01.752 [2024-10-08 18:29:19.884904] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.752 [2024-10-08 18:29:19.884917] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.752 [2024-10-08 18:29:19.884925] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.752 [2024-10-08 18:29:19.884932] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296900) on tqpair=0x2236760 00:22:01.752 [2024-10-08 18:29:19.884939] nvme_ctrlr.c:1167:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:01.752 [2024-10-08 18:29:19.884947] nvme_ctrlr.c:1170:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:01.752 [2024-10-08 18:29:19.884963] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.752 [2024-10-08 18:29:19.884972] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.752 [2024-10-08 18:29:19.884979] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2236760) 00:22:01.752 [2024-10-08 18:29:19.884989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.752 [2024-10-08 18:29:19.885009] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296900, cid 3, qid 0 00:22:01.752 [2024-10-08 18:29:19.885085] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.752 [2024-10-08 18:29:19.885097] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.752 [2024-10-08 18:29:19.885105] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.752 [2024-10-08 18:29:19.885113] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296900) on tqpair=0x2236760 00:22:01.752 [2024-10-08 18:29:19.885129] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.752 [2024-10-08 18:29:19.885139] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.752 [2024-10-08 18:29:19.885146] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2236760) 00:22:01.752 [2024-10-08 18:29:19.885156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.752 [2024-10-08 18:29:19.885177] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296900, cid 3, qid 0 00:22:01.752 [2024-10-08 18:29:19.885254] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.752 [2024-10-08 18:29:19.885268] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.752 [2024-10-08 18:29:19.885276] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.752 [2024-10-08 18:29:19.885283] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296900) on tqpair=0x2236760 00:22:01.752 [2024-10-08 18:29:19.885308] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.752 [2024-10-08 18:29:19.885318] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.752 [2024-10-08 18:29:19.885325] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2236760) 00:22:01.752 [2024-10-08 18:29:19.885336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.752 [2024-10-08 18:29:19.885362] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296900, cid 3, qid 0 00:22:01.752 [2024-10-08 18:29:19.885453] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.752 [2024-10-08 18:29:19.885466] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.752 [2024-10-08 18:29:19.885474] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.752 [2024-10-08 18:29:19.885481] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296900) on tqpair=0x2236760 00:22:01.752 [2024-10-08 18:29:19.885497] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.752 [2024-10-08 18:29:19.885507] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.752 [2024-10-08 18:29:19.885513] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2236760) 00:22:01.752 [2024-10-08 18:29:19.885524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.752 [2024-10-08 18:29:19.885545] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296900, cid 3, qid 0 00:22:01.752 [2024-10-08 18:29:19.885669] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.752 [2024-10-08 18:29:19.885682] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.752 [2024-10-08 18:29:19.885689] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.752 [2024-10-08 18:29:19.885696] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296900) on tqpair=0x2236760 00:22:01.752 [2024-10-08 18:29:19.885713] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.752 [2024-10-08 18:29:19.885722] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.752 [2024-10-08 18:29:19.885729] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2236760) 00:22:01.752 [2024-10-08 18:29:19.885740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.752 [2024-10-08 18:29:19.885760] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296900, cid 3, qid 0 00:22:01.752 [2024-10-08 18:29:19.885837] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.752 [2024-10-08 
18:29:19.885852] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.752 [2024-10-08 18:29:19.885859] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.752 [2024-10-08 18:29:19.885866] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296900) on tqpair=0x2236760 00:22:01.752 [2024-10-08 18:29:19.885882] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.752 [2024-10-08 18:29:19.885892] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.752 [2024-10-08 18:29:19.885899] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2236760) 00:22:01.752 [2024-10-08 18:29:19.885910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.752 [2024-10-08 18:29:19.885931] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296900, cid 3, qid 0 00:22:01.752 [2024-10-08 18:29:19.886004] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.752 [2024-10-08 18:29:19.886019] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.752 [2024-10-08 18:29:19.886026] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.752 [2024-10-08 18:29:19.886033] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296900) on tqpair=0x2236760 00:22:01.752 [2024-10-08 18:29:19.886050] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.752 [2024-10-08 18:29:19.886059] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.752 [2024-10-08 18:29:19.886066] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2236760) 00:22:01.752 [2024-10-08 18:29:19.886077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.752 [2024-10-08 18:29:19.886098] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296900, cid 3, qid 0 00:22:01.752 [2024-10-08 18:29:19.886186] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.752 [2024-10-08 18:29:19.886200] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.752 [2024-10-08 18:29:19.886208] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.752 [2024-10-08 18:29:19.886215] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296900) on tqpair=0x2236760 00:22:01.752 [2024-10-08 18:29:19.886230] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.752 [2024-10-08 18:29:19.886240] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.752 [2024-10-08 18:29:19.886247] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2236760) 00:22:01.752 [2024-10-08 18:29:19.886257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.752 [2024-10-08 18:29:19.886278] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296900, cid 3, qid 0 00:22:01.752 [2024-10-08 18:29:19.890305] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.752 [2024-10-08 18:29:19.890321] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.752 [2024-10-08 18:29:19.890328] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.752 
[2024-10-08 18:29:19.890335] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296900) on tqpair=0x2236760 00:22:01.752 [2024-10-08 18:29:19.890353] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:01.752 [2024-10-08 18:29:19.890363] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:01.752 [2024-10-08 18:29:19.890370] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2236760) 00:22:01.752 [2024-10-08 18:29:19.890381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.752 [2024-10-08 18:29:19.890404] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2296900, cid 3, qid 0 00:22:01.752 [2024-10-08 18:29:19.890535] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:01.752 [2024-10-08 18:29:19.890548] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:01.752 [2024-10-08 18:29:19.890555] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:01.752 [2024-10-08 18:29:19.890562] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2296900) on tqpair=0x2236760 00:22:01.752 [2024-10-08 18:29:19.890575] nvme_ctrlr.c:1289:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:22:01.752 0% 00:22:01.752 Data Units Read: 0 00:22:01.752 Data Units Written: 0 00:22:01.752 Host Read Commands: 0 00:22:01.752 Host Write Commands: 0 00:22:01.752 Controller Busy Time: 0 minutes 00:22:01.752 Power Cycles: 0 00:22:01.752 Power On Hours: 0 hours 00:22:01.752 Unsafe Shutdowns: 0 00:22:01.752 Unrecoverable Media Errors: 0 00:22:01.752 Lifetime Error Log Entries: 0 00:22:01.752 Warning Temperature Time: 0 minutes 00:22:01.752 Critical Temperature Time: 0 minutes 00:22:01.752 00:22:01.752 Number of Queues 00:22:01.752 ================ 00:22:01.752 Number of I/O Submission Queues: 127 00:22:01.752 Number of I/O Completion Queues: 127 00:22:01.752 00:22:01.752 Active Namespaces 00:22:01.752 ================= 00:22:01.752 Namespace ID:1 00:22:01.752 Error Recovery Timeout: Unlimited 00:22:01.752 Command Set Identifier: NVM (00h) 00:22:01.752 Deallocate: Supported 00:22:01.752 Deallocated/Unwritten Error: Not Supported 00:22:01.752 Deallocated Read Value: Unknown 00:22:01.752 Deallocate in Write Zeroes: Not Supported 00:22:01.752 Deallocated Guard Field: 0xFFFF 00:22:01.752 Flush: Supported 00:22:01.752 Reservation: Supported 00:22:01.752 Namespace Sharing Capabilities: Multiple Controllers 00:22:01.752 Size (in LBAs): 131072 (0GiB) 00:22:01.752 Capacity (in LBAs): 131072 (0GiB) 00:22:01.752 Utilization (in LBAs): 131072 (0GiB) 00:22:01.752 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:01.752 EUI64: ABCDEF0123456789 00:22:01.752 UUID: 565bffd7-25ff-4a7c-aeda-c45447224032 00:22:01.752 Thin Provisioning: Not Supported 00:22:01.752 Per-NS Atomic Units: Yes 00:22:01.752 Atomic Boundary Size (Normal): 0 00:22:01.752 Atomic Boundary Size (PFail): 0 00:22:01.752 Atomic Boundary Offset: 0 00:22:01.752 Maximum Single Source Range Length: 65535 00:22:01.752 Maximum Copy Length: 65535 00:22:01.752 Maximum Source Range Count: 1 00:22:01.752 NGUID/EUI64 Never Reused: No 00:22:01.752 Namespace Write Protected: No 00:22:01.752 Number of LBA Formats: 1 00:22:01.752 Current LBA Format: LBA Format #00 00:22:01.752 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:01.752 00:22:01.752 18:29:19 
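For reference, a controller/namespace summary like the one printed above can be reproduced from any initiator host with stock nvme-cli; the address, service port and subsystem NQN below are the ones used in this run, while the /dev/nvme1 device name is only an assumption (check 'nvme list' after connecting):

  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme id-ctrl /dev/nvme1          # device name is an assumption, not taken from this log
  nvme id-ns   /dev/nvme1 -n 1
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1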
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:01.752 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:01.752 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.752 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.752 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.752 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:01.752 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:01.752 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:01.752 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:22:01.752 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:01.752 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:22:01.752 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:01.752 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:01.752 rmmod nvme_tcp 00:22:01.752 rmmod nvme_fabrics 00:22:01.752 rmmod nvme_keyring 00:22:01.752 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:01.752 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:22:01.752 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:22:01.752 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 1308426 ']' 00:22:01.752 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 1308426 00:22:01.752 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 1308426 ']' 00:22:01.752 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 1308426 00:22:01.752 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:22:01.752 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:01.752 18:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1308426 00:22:01.752 18:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:01.752 18:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:01.752 18:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1308426' 00:22:01.752 killing process with pid 1308426 00:22:01.752 18:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 1308426 00:22:01.752 18:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 1308426 00:22:02.319 18:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:02.319 18:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:02.319 18:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:02.319 18:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:02.319 18:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 
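The cleanup traced above (rpc_cmd nvmf_delete_subsystem, nvmftestfini, killprocess) goes through the test-harness helpers; a rough standalone sketch of the same steps, using the rpc.py client that ships with SPDK and assuming the default RPC socket, run from the repo root (1308426 is the target pid of this particular run):

  sudo ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  sudo kill 1308426                # target pid from this run
  sudo modprobe -r nvme-tcp nvme-fabrics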
00:22:02.319 18:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:02.319 18:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:22:02.319 18:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:02.319 18:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:02.319 18:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.319 18:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:02.319 18:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.221 18:29:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:04.221 00:22:04.221 real 0m5.690s 00:22:04.221 user 0m4.529s 00:22:04.221 sys 0m2.018s 00:22:04.221 18:29:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:04.221 18:29:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:04.221 ************************************ 00:22:04.221 END TEST nvmf_identify 00:22:04.221 ************************************ 00:22:04.221 18:29:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:04.221 18:29:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:04.221 18:29:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:04.221 18:29:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.221 ************************************ 00:22:04.221 START TEST nvmf_perf 00:22:04.221 ************************************ 00:22:04.221 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:04.221 * Looking for test storage... 
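run_test wraps the next script with xtrace and the real/user/sys timing summary seen above for nvmf_identify; outside the harness, roughly the same invocation (repo path taken from this job, root privileges assumed) would be:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  time sudo ./test/nvmf/host/perf.sh --transport=tcp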
00:22:04.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:04.221 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:04.221 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:22:04.221 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:04.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.480 --rc genhtml_branch_coverage=1 00:22:04.480 --rc genhtml_function_coverage=1 00:22:04.480 --rc genhtml_legend=1 00:22:04.480 --rc geninfo_all_blocks=1 00:22:04.480 --rc geninfo_unexecuted_blocks=1 00:22:04.480 00:22:04.480 ' 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:04.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.480 --rc genhtml_branch_coverage=1 00:22:04.480 --rc genhtml_function_coverage=1 00:22:04.480 --rc genhtml_legend=1 00:22:04.480 --rc geninfo_all_blocks=1 00:22:04.480 --rc geninfo_unexecuted_blocks=1 00:22:04.480 00:22:04.480 ' 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:04.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.480 --rc genhtml_branch_coverage=1 00:22:04.480 --rc genhtml_function_coverage=1 00:22:04.480 --rc genhtml_legend=1 00:22:04.480 --rc geninfo_all_blocks=1 00:22:04.480 --rc geninfo_unexecuted_blocks=1 00:22:04.480 00:22:04.480 ' 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:04.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.480 --rc genhtml_branch_coverage=1 00:22:04.480 --rc genhtml_function_coverage=1 00:22:04.480 --rc genhtml_legend=1 00:22:04.480 --rc geninfo_all_blocks=1 00:22:04.480 --rc geninfo_unexecuted_blocks=1 00:22:04.480 00:22:04.480 ' 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.480 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:04.481 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.481 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:04.481 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:04.481 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:04.481 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:04.481 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:04.481 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:04.481 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:04.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:04.481 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:04.481 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:04.481 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:04.481 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:04.481 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:04.481 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:04.481 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:04.481 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:04.481 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:04.481 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:04.481 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:04.481 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:04.481 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.481 18:29:22 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:04.481 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.481 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:04.481 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:04.481 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:04.481 18:29:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:07.015 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:07.015 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:07.015 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:07.015 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:07.015 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:07.015 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:07.015 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:07.015 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:07.015 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:07.015 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:07.015 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:07.015 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:07.015 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:07.015 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:07.015 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:07.015 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:07.015 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:07.015 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:07.015 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:07.015 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:07.015 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:07.015 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:07.015 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:07.016 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:07.016 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:07.016 Found net devices under 0000:09:00.0: cvl_0_0 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:07.016 18:29:24 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:07.016 Found net devices under 0000:09:00.1: cvl_0_1 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:07.016 18:29:24 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:07.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:07.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:22:07.016 00:22:07.016 --- 10.0.0.2 ping statistics --- 00:22:07.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.016 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:07.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:07.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:22:07.016 00:22:07.016 --- 10.0.0.1 ping statistics --- 00:22:07.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.016 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=1310517 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 1310517 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 1310517 ']' 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:22:07.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:07.016 18:29:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:07.016 [2024-10-08 18:29:24.938228] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:22:07.016 [2024-10-08 18:29:24.938336] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:07.016 [2024-10-08 18:29:25.003617] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:07.016 [2024-10-08 18:29:25.113265] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:07.016 [2024-10-08 18:29:25.113363] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:07.016 [2024-10-08 18:29:25.113378] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:07.016 [2024-10-08 18:29:25.113390] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:07.016 [2024-10-08 18:29:25.113400] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:07.016 [2024-10-08 18:29:25.115053] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:07.016 [2024-10-08 18:29:25.115086] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:07.016 [2024-10-08 18:29:25.115144] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:22:07.016 [2024-10-08 18:29:25.115147] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.016 18:29:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:07.016 18:29:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:22:07.017 18:29:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:07.017 18:29:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:07.017 18:29:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:07.017 18:29:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:07.017 18:29:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:07.017 18:29:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:10.293 18:29:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:10.293 18:29:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:10.551 18:29:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:0b:00.0 00:22:10.551 18:29:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:10.809 18:29:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
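The surrounding trace shows perf.sh assembling an NVMe-oF/TCP target through rpc.py: a 64 MiB Malloc bdev with 512-byte blocks is created here, the local NVMe drive picked up by gen_nvme.sh (0000:0b:00.0, exposed as Nvme0n1) is added beside it, and in the lines that follow both are exported through subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420. A minimal consolidated sketch of that sequence, using only the calls visible in this trace and assuming the target is already running with its RPC socket at /var/tmp/spdk.sock; the Jenkins workspace path is shortened to $RPC for readability:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Back-end bdevs: a 64 MiB RAM disk plus whatever local NVMe drives gen_nvme.sh detects
"$RPC" bdev_malloc_create 64 512                                    # -> Malloc0
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh | "$RPC" load_subsystem_config   # attaches Nvme0 -> Nvme0n1, as perf.sh does on its line 28

# NVMe-oF plumbing over TCP: transport, subsystem, namespaces, data and discovery listeners
"$RPC" nvmf_create_transport -t tcp -o
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

After this setup the subsequent spdk_nvme_perf runs in the trace connect to trtype:tcp traddr:10.0.0.2 trsvcid:4420 and exercise both namespaces.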
00:22:10.809 18:29:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:0b:00.0 ']' 00:22:10.809 18:29:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:10.809 18:29:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:10.809 18:29:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:11.066 [2024-10-08 18:29:29.247094] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:11.066 18:29:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:11.323 18:29:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:11.323 18:29:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:11.580 18:29:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:11.580 18:29:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:11.838 18:29:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:12.095 [2024-10-08 18:29:30.359146] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.095 18:29:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:12.660 18:29:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:0b:00.0 ']' 00:22:12.660 18:29:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:22:12.660 18:29:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:12.660 18:29:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:22:13.610 Initializing NVMe Controllers 00:22:13.610 Attached to NVMe Controller at 0000:0b:00.0 [8086:0a54] 00:22:13.610 Associating PCIE (0000:0b:00.0) NSID 1 with lcore 0 00:22:13.610 Initialization complete. Launching workers. 
00:22:13.610 ======================================================== 00:22:13.610 Latency(us) 00:22:13.610 Device Information : IOPS MiB/s Average min max 00:22:13.610 PCIE (0000:0b:00.0) NSID 1 from core 0: 86375.41 337.40 369.85 38.93 5457.86 00:22:13.610 ======================================================== 00:22:13.610 Total : 86375.41 337.40 369.85 38.93 5457.86 00:22:13.610 00:22:13.927 18:29:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:15.302 Initializing NVMe Controllers 00:22:15.302 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:15.302 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:15.302 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:15.302 Initialization complete. Launching workers. 00:22:15.302 ======================================================== 00:22:15.302 Latency(us) 00:22:15.302 Device Information : IOPS MiB/s Average min max 00:22:15.302 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 99.94 0.39 10115.67 148.98 45865.33 00:22:15.302 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 65.96 0.26 15279.86 7002.04 50874.33 00:22:15.302 ======================================================== 00:22:15.302 Total : 165.91 0.65 12168.90 148.98 50874.33 00:22:15.302 00:22:15.302 18:29:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:16.675 Initializing NVMe Controllers 00:22:16.675 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:16.675 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:16.675 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:16.675 Initialization complete. Launching workers. 00:22:16.675 ======================================================== 00:22:16.675 Latency(us) 00:22:16.675 Device Information : IOPS MiB/s Average min max 00:22:16.675 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8335.99 32.56 3840.47 670.08 7721.36 00:22:16.675 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3850.00 15.04 8354.47 6798.00 16211.10 00:22:16.675 ======================================================== 00:22:16.675 Total : 12185.99 47.60 5266.61 670.08 16211.10 00:22:16.675 00:22:16.675 18:29:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:16.675 18:29:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:16.675 18:29:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:19.201 Initializing NVMe Controllers 00:22:19.202 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:19.202 Controller IO queue size 128, less than required. 00:22:19.202 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:22:19.202 Controller IO queue size 128, less than required. 00:22:19.202 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:19.202 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:19.202 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:19.202 Initialization complete. Launching workers. 00:22:19.202 ======================================================== 00:22:19.202 Latency(us) 00:22:19.202 Device Information : IOPS MiB/s Average min max 00:22:19.202 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1755.47 438.87 74246.36 52307.77 110540.89 00:22:19.202 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 572.83 143.21 231748.18 70497.20 325386.05 00:22:19.202 ======================================================== 00:22:19.202 Total : 2328.30 582.08 112996.27 52307.77 325386.05 00:22:19.202 00:22:19.202 18:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:19.202 No valid NVMe controllers or AIO or URING devices found 00:22:19.202 Initializing NVMe Controllers 00:22:19.202 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:19.202 Controller IO queue size 128, less than required. 00:22:19.202 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:19.202 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:19.202 Controller IO queue size 128, less than required. 00:22:19.202 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:19.202 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:22:19.202 WARNING: Some requested NVMe devices were skipped 00:22:19.202 18:29:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:21.729 Initializing NVMe Controllers 00:22:21.729 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:21.729 Controller IO queue size 128, less than required. 00:22:21.729 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:21.729 Controller IO queue size 128, less than required. 00:22:21.729 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:21.729 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:21.729 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:21.729 Initialization complete. Launching workers. 
00:22:21.729 00:22:21.729 ==================== 00:22:21.729 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:21.729 TCP transport: 00:22:21.729 polls: 9587 00:22:21.729 idle_polls: 6398 00:22:21.729 sock_completions: 3189 00:22:21.729 nvme_completions: 5821 00:22:21.729 submitted_requests: 8716 00:22:21.729 queued_requests: 1 00:22:21.729 00:22:21.729 ==================== 00:22:21.729 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:21.729 TCP transport: 00:22:21.729 polls: 9802 00:22:21.729 idle_polls: 5731 00:22:21.729 sock_completions: 4071 00:22:21.729 nvme_completions: 6541 00:22:21.729 submitted_requests: 9660 00:22:21.729 queued_requests: 1 00:22:21.729 ======================================================== 00:22:21.729 Latency(us) 00:22:21.729 Device Information : IOPS MiB/s Average min max 00:22:21.729 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1454.67 363.67 90397.97 63609.37 149200.93 00:22:21.729 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1634.63 408.66 79056.63 36449.11 130420.04 00:22:21.729 ======================================================== 00:22:21.729 Total : 3089.31 772.33 84396.97 36449.11 149200.93 00:22:21.729 00:22:21.729 18:29:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:21.729 18:29:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:21.986 18:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:21.986 18:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:21.986 18:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:21.986 18:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:21.986 18:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:22:21.986 18:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:21.986 18:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:22:21.986 18:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:21.986 18:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:21.986 rmmod nvme_tcp 00:22:21.986 rmmod nvme_fabrics 00:22:21.986 rmmod nvme_keyring 00:22:21.986 18:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:21.986 18:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:22:21.987 18:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:22:21.987 18:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 1310517 ']' 00:22:21.987 18:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 1310517 00:22:21.987 18:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 1310517 ']' 00:22:21.987 18:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 1310517 00:22:21.987 18:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:22:21.987 18:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:21.987 18:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1310517 00:22:21.987 18:29:40 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:21.987 18:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:21.987 18:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1310517' 00:22:21.987 killing process with pid 1310517 00:22:21.987 18:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 1310517 00:22:21.987 18:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 1310517 00:22:23.916 18:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:23.916 18:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:23.916 18:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:23.916 18:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:22:23.916 18:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:22:23.917 18:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:23.917 18:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:22:23.917 18:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:23.917 18:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:23.917 18:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.917 18:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.917 18:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.822 18:29:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:25.822 00:22:25.822 real 0m21.420s 00:22:25.822 user 1m5.318s 00:22:25.822 sys 0m5.722s 00:22:25.822 18:29:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:25.822 18:29:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:25.822 ************************************ 00:22:25.822 END TEST nvmf_perf 00:22:25.822 ************************************ 00:22:25.822 18:29:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:25.822 18:29:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:25.822 18:29:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:25.822 18:29:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.822 ************************************ 00:22:25.822 START TEST nvmf_fio_host 00:22:25.822 ************************************ 00:22:25.822 18:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:25.822 * Looking for test storage... 
00:22:25.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:25.822 18:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:25.822 18:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:22:25.822 18:29:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:25.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.822 --rc genhtml_branch_coverage=1 00:22:25.822 --rc genhtml_function_coverage=1 00:22:25.822 --rc genhtml_legend=1 00:22:25.822 --rc geninfo_all_blocks=1 00:22:25.822 --rc geninfo_unexecuted_blocks=1 00:22:25.822 00:22:25.822 ' 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:25.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.822 --rc genhtml_branch_coverage=1 00:22:25.822 --rc genhtml_function_coverage=1 00:22:25.822 --rc genhtml_legend=1 00:22:25.822 --rc geninfo_all_blocks=1 00:22:25.822 --rc geninfo_unexecuted_blocks=1 00:22:25.822 00:22:25.822 ' 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:25.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.822 --rc genhtml_branch_coverage=1 00:22:25.822 --rc genhtml_function_coverage=1 00:22:25.822 --rc genhtml_legend=1 00:22:25.822 --rc geninfo_all_blocks=1 00:22:25.822 --rc geninfo_unexecuted_blocks=1 00:22:25.822 00:22:25.822 ' 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:25.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.822 --rc genhtml_branch_coverage=1 00:22:25.822 --rc genhtml_function_coverage=1 00:22:25.822 --rc genhtml_legend=1 00:22:25.822 --rc geninfo_all_blocks=1 00:22:25.822 --rc geninfo_unexecuted_blocks=1 00:22:25.822 00:22:25.822 ' 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:25.822 18:29:44 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:25.822 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:25.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:25.823 
18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:25.823 18:29:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:28.357 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:28.357 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:28.357 Found net devices under 0000:09:00.0: cvl_0_0 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:28.357 Found net devices under 0000:09:00.1: cvl_0_1 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:28.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:28.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:22:28.357 00:22:28.357 --- 10.0.0.2 ping statistics --- 00:22:28.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.357 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:28.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:28.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:22:28.357 00:22:28.357 --- 10.0.0.1 ping statistics --- 00:22:28.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.357 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:22:28.357 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:28.358 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.358 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:28.358 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:28.358 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.358 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:28.358 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:28.358 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:28.358 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:28.358 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:28.358 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.358 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1314488 00:22:28.358 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:28.358 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:28.358 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1314488 00:22:28.358 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 1314488 ']' 00:22:28.358 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.358 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:28.358 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.358 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:28.358 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.358 [2024-10-08 18:29:46.485268] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:22:28.358 [2024-10-08 18:29:46.485379] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.358 [2024-10-08 18:29:46.548085] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:28.616 [2024-10-08 18:29:46.651406] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:28.616 [2024-10-08 18:29:46.651454] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:28.616 [2024-10-08 18:29:46.651467] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:28.616 [2024-10-08 18:29:46.651477] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:28.616 [2024-10-08 18:29:46.651486] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:28.616 [2024-10-08 18:29:46.653024] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.616 [2024-10-08 18:29:46.653133] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:28.616 [2024-10-08 18:29:46.653209] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:22:28.616 [2024-10-08 18:29:46.653212] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.616 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:28.616 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:22:28.616 18:29:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:28.873 [2024-10-08 18:29:47.044689] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.873 18:29:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:28.873 18:29:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:28.873 18:29:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.873 18:29:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:29.130 Malloc1 00:22:29.130 18:29:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:29.388 18:29:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:29.645 18:29:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:29.902 [2024-10-08 18:29:48.163361] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:29.902 18:29:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:30.160 18:29:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:30.160 18:29:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:30.160 18:29:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:30.160 18:29:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:30.160 18:29:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:30.160 18:29:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:30.160 18:29:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:30.160 18:29:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:30.160 18:29:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:30.160 18:29:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:30.417 18:29:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:30.417 18:29:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:30.417 18:29:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:30.417 18:29:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:30.417 18:29:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:30.417 18:29:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:30.417 18:29:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:30.417 18:29:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:30.417 18:29:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:30.417 18:29:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:30.417 18:29:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:30.417 18:29:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:30.417 18:29:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:30.417 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:30.417 fio-3.35 00:22:30.417 Starting 1 thread 00:22:32.944 00:22:32.944 test: (groupid=0, jobs=1): 
err= 0: pid=1314848: Tue Oct 8 18:29:50 2024 00:22:32.944 read: IOPS=8382, BW=32.7MiB/s (34.3MB/s)(65.7MiB/2007msec) 00:22:32.944 slat (nsec): min=1974, max=111953, avg=2603.04, stdev=1572.17 00:22:32.944 clat (usec): min=2551, max=15233, avg=8325.82, stdev=728.86 00:22:32.944 lat (usec): min=2576, max=15235, avg=8328.42, stdev=728.78 00:22:32.944 clat percentiles (usec): 00:22:32.944 | 1.00th=[ 6718], 5.00th=[ 7177], 10.00th=[ 7439], 20.00th=[ 7767], 00:22:32.944 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 8356], 60.00th=[ 8455], 00:22:32.944 | 70.00th=[ 8717], 80.00th=[ 8848], 90.00th=[ 9110], 95.00th=[ 9503], 00:22:32.944 | 99.00th=[ 9896], 99.50th=[10028], 99.90th=[13042], 99.95th=[13829], 00:22:32.944 | 99.99th=[15270] 00:22:32.944 bw ( KiB/s): min=32624, max=34048, per=99.88%, avg=33488.00, stdev=608.98, samples=4 00:22:32.944 iops : min= 8156, max= 8512, avg=8372.00, stdev=152.25, samples=4 00:22:32.944 write: IOPS=8375, BW=32.7MiB/s (34.3MB/s)(65.7MiB/2007msec); 0 zone resets 00:22:32.944 slat (usec): min=2, max=102, avg= 2.71, stdev= 1.40 00:22:32.944 clat (usec): min=1022, max=13562, avg=6881.92, stdev=574.24 00:22:32.944 lat (usec): min=1028, max=13564, avg=6884.63, stdev=574.20 00:22:32.944 clat percentiles (usec): 00:22:32.944 | 1.00th=[ 5604], 5.00th=[ 5997], 10.00th=[ 6194], 20.00th=[ 6456], 00:22:32.944 | 30.00th=[ 6587], 40.00th=[ 6783], 50.00th=[ 6915], 60.00th=[ 6980], 00:22:32.944 | 70.00th=[ 7177], 80.00th=[ 7308], 90.00th=[ 7570], 95.00th=[ 7767], 00:22:32.944 | 99.00th=[ 8094], 99.50th=[ 8225], 99.90th=[ 9634], 99.95th=[11863], 00:22:32.944 | 99.99th=[13566] 00:22:32.944 bw ( KiB/s): min=33344, max=33856, per=100.00%, avg=33524.00, stdev=229.04, samples=4 00:22:32.944 iops : min= 8336, max= 8464, avg=8381.00, stdev=57.26, samples=4 00:22:32.944 lat (msec) : 2=0.03%, 4=0.08%, 10=99.50%, 20=0.39% 00:22:32.944 cpu : usr=65.30%, sys=33.15%, ctx=109, majf=0, minf=32 00:22:32.944 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:32.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:32.944 issued rwts: total=16823,16810,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.944 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:32.944 00:22:32.944 Run status group 0 (all jobs): 00:22:32.944 READ: bw=32.7MiB/s (34.3MB/s), 32.7MiB/s-32.7MiB/s (34.3MB/s-34.3MB/s), io=65.7MiB (68.9MB), run=2007-2007msec 00:22:32.944 WRITE: bw=32.7MiB/s (34.3MB/s), 32.7MiB/s-32.7MiB/s (34.3MB/s-34.3MB/s), io=65.7MiB (68.9MB), run=2007-2007msec 00:22:32.944 18:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:32.944 18:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:32.944 18:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:32.944 18:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:32.944 18:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # 
local sanitizers 00:22:32.944 18:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:32.944 18:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:32.944 18:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:32.944 18:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:32.944 18:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:32.944 18:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:32.944 18:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:32.944 18:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:32.944 18:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:32.944 18:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:32.944 18:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:32.944 18:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:32.944 18:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:32.944 18:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:32.944 18:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:32.944 18:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:32.944 18:29:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:32.944 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:32.944 fio-3.35 00:22:32.944 Starting 1 thread 00:22:35.471 00:22:35.471 test: (groupid=0, jobs=1): err= 0: pid=1315183: Tue Oct 8 18:29:53 2024 00:22:35.471 read: IOPS=8388, BW=131MiB/s (137MB/s)(263MiB/2010msec) 00:22:35.471 slat (nsec): min=2786, max=97512, avg=3720.15, stdev=1738.42 00:22:35.471 clat (usec): min=2095, max=16892, avg=8730.72, stdev=2116.59 00:22:35.471 lat (usec): min=2098, max=16895, avg=8734.44, stdev=2116.60 00:22:35.471 clat percentiles (usec): 00:22:35.471 | 1.00th=[ 4555], 5.00th=[ 5473], 10.00th=[ 6063], 20.00th=[ 6980], 00:22:35.471 | 30.00th=[ 7504], 40.00th=[ 8094], 50.00th=[ 8586], 60.00th=[ 9110], 00:22:35.471 | 70.00th=[ 9765], 80.00th=[10552], 90.00th=[11600], 95.00th=[12518], 00:22:35.471 | 99.00th=[14091], 99.50th=[14615], 99.90th=[15664], 99.95th=[16057], 00:22:35.471 | 99.99th=[16909] 00:22:35.471 bw ( KiB/s): min=62272, max=76160, per=51.47%, avg=69080.00, stdev=5672.66, samples=4 00:22:35.471 iops : min= 3892, max= 4760, avg=4317.50, stdev=354.54, samples=4 00:22:35.471 write: IOPS=4743, BW=74.1MiB/s (77.7MB/s)(141MiB/1898msec); 0 zone resets 00:22:35.471 
slat (usec): min=30, max=137, avg=33.73, stdev= 5.44 00:22:35.471 clat (usec): min=6143, max=20328, avg=11548.03, stdev=2091.01 00:22:35.471 lat (usec): min=6174, max=20360, avg=11581.76, stdev=2091.18 00:22:35.471 clat percentiles (usec): 00:22:35.471 | 1.00th=[ 7635], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[ 9765], 00:22:35.471 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11338], 60.00th=[11863], 00:22:35.471 | 70.00th=[12387], 80.00th=[13304], 90.00th=[14484], 95.00th=[15139], 00:22:35.471 | 99.00th=[17433], 99.50th=[18482], 99.90th=[19792], 99.95th=[20055], 00:22:35.471 | 99.99th=[20317] 00:22:35.472 bw ( KiB/s): min=65536, max=77344, per=94.09%, avg=71408.00, stdev=4941.18, samples=4 00:22:35.472 iops : min= 4096, max= 4834, avg=4463.00, stdev=308.82, samples=4 00:22:35.472 lat (msec) : 4=0.24%, 10=56.01%, 20=43.72%, 50=0.03% 00:22:35.472 cpu : usr=77.85%, sys=20.91%, ctx=50, majf=0, minf=46 00:22:35.472 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:35.472 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.472 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:35.472 issued rwts: total=16861,9003,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:35.472 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:35.472 00:22:35.472 Run status group 0 (all jobs): 00:22:35.472 READ: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=263MiB (276MB), run=2010-2010msec 00:22:35.472 WRITE: bw=74.1MiB/s (77.7MB/s), 74.1MiB/s-74.1MiB/s (77.7MB/s-77.7MB/s), io=141MiB (148MB), run=1898-1898msec 00:22:35.472 18:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:35.472 18:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:35.472 18:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:35.472 18:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:35.472 18:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:35.472 18:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:35.472 18:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:22:35.472 18:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:35.472 18:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:22:35.472 18:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:35.472 18:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:35.472 rmmod nvme_tcp 00:22:35.472 rmmod nvme_fabrics 00:22:35.472 rmmod nvme_keyring 00:22:35.730 18:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:35.730 18:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:22:35.730 18:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:22:35.730 18:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 1314488 ']' 00:22:35.730 18:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 1314488 00:22:35.730 18:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 1314488 ']' 00:22:35.730 18:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@954 -- # kill -0 1314488 00:22:35.730 18:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:22:35.730 18:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:35.730 18:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1314488 00:22:35.730 18:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:35.730 18:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:35.730 18:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1314488' 00:22:35.730 killing process with pid 1314488 00:22:35.730 18:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 1314488 00:22:35.730 18:29:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 1314488 00:22:35.989 18:29:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:35.989 18:29:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:35.989 18:29:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:35.989 18:29:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:22:35.989 18:29:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:22:35.989 18:29:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:22:35.989 18:29:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:35.989 18:29:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:35.989 18:29:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:35.989 18:29:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.989 18:29:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.989 18:29:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.892 18:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:37.892 00:22:37.892 real 0m12.239s 00:22:37.892 user 0m35.357s 00:22:37.892 sys 0m4.063s 00:22:37.892 18:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:37.892 18:29:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.892 ************************************ 00:22:37.892 END TEST nvmf_fio_host 00:22:37.892 ************************************ 00:22:37.892 18:29:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:37.892 18:29:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:37.892 18:29:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:37.892 18:29:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.151 ************************************ 00:22:38.151 START TEST nvmf_failover 00:22:38.151 ************************************ 00:22:38.151 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:38.151 * Looking for test storage... 00:22:38.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:38.151 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:38.151 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:22:38.151 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:38.151 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:38.151 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:38.151 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:38.151 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:38.151 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:38.151 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:38.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.152 --rc genhtml_branch_coverage=1 00:22:38.152 --rc genhtml_function_coverage=1 00:22:38.152 --rc genhtml_legend=1 00:22:38.152 --rc geninfo_all_blocks=1 00:22:38.152 --rc geninfo_unexecuted_blocks=1 00:22:38.152 00:22:38.152 ' 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:38.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.152 --rc genhtml_branch_coverage=1 00:22:38.152 --rc genhtml_function_coverage=1 00:22:38.152 --rc genhtml_legend=1 00:22:38.152 --rc geninfo_all_blocks=1 00:22:38.152 --rc geninfo_unexecuted_blocks=1 00:22:38.152 00:22:38.152 ' 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:38.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.152 --rc genhtml_branch_coverage=1 00:22:38.152 --rc genhtml_function_coverage=1 00:22:38.152 --rc genhtml_legend=1 00:22:38.152 --rc geninfo_all_blocks=1 00:22:38.152 --rc geninfo_unexecuted_blocks=1 00:22:38.152 00:22:38.152 ' 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:38.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.152 --rc genhtml_branch_coverage=1 00:22:38.152 --rc genhtml_function_coverage=1 00:22:38.152 --rc genhtml_legend=1 00:22:38.152 --rc geninfo_all_blocks=1 00:22:38.152 --rc geninfo_unexecuted_blocks=1 00:22:38.152 00:22:38.152 ' 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:38.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:22:38.152 18:29:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:40.685 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:40.685 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci 
in "${pci_devs[@]}" 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:40.685 Found net devices under 0000:09:00.0: cvl_0_0 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:40.685 Found net devices under 0000:09:00.1: cvl_0_1 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:40.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:40.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:22:40.685 00:22:40.685 --- 10.0.0.2 ping statistics --- 00:22:40.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.685 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:40.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:22:40.685 00:22:40.685 --- 10.0.0.1 ping statistics --- 00:22:40.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.685 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:40.685 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.686 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:40.686 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:40.686 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.686 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:40.686 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:40.686 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:40.686 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:40.686 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:40.686 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:40.686 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=1317497 00:22:40.686 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:40.686 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 1317497 00:22:40.686 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1317497 ']' 00:22:40.686 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.686 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:40.686 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.686 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:40.686 18:29:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:40.686 [2024-10-08 18:29:58.735211] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:22:40.686 [2024-10-08 18:29:58.735306] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.686 [2024-10-08 18:29:58.798427] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:40.686 [2024-10-08 18:29:58.906974] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
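Stripped of the xtrace prefixes, the host/failover.sh sequence that the rest of this trace executes is roughly the following sketch (paths shown relative to the spdk checkout, $rpc used as shorthand for scripts/rpc.py, and the harness's waitforlisten/trap bookkeeping omitted):

  rpc=scripts/rpc.py
  # target side: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace set up above
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done
  # host side: bdevperf attaches two paths in failover mode, then listeners are pulled while I/O runs
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  sleep 1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # drop the first path
  sleep 3
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # drop the second path
  sleep 3
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  wait   # the 15 s verify run finishes and bdevperf prints the JSON summary seen further down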
00:22:40.686 [2024-10-08 18:29:58.907026] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.686 [2024-10-08 18:29:58.907055] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.686 [2024-10-08 18:29:58.907067] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.686 [2024-10-08 18:29:58.907076] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:40.686 [2024-10-08 18:29:58.907933] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:40.686 [2024-10-08 18:29:58.907991] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:22:40.686 [2024-10-08 18:29:58.907995] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.944 18:29:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:40.944 18:29:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:22:40.944 18:29:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:40.944 18:29:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:40.944 18:29:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:40.944 18:29:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.944 18:29:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:41.202 [2024-10-08 18:29:59.302824] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:41.202 18:29:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:41.460 Malloc0 00:22:41.460 18:29:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:41.717 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:42.283 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:42.570 [2024-10-08 18:30:00.572560] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:42.570 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:42.570 [2024-10-08 18:30:00.853473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:42.828 18:30:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:43.086 [2024-10-08 18:30:01.138430] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:22:43.086 18:30:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1317870 00:22:43.086 18:30:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:43.086 18:30:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:43.086 18:30:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1317870 /var/tmp/bdevperf.sock 00:22:43.086 18:30:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1317870 ']' 00:22:43.086 18:30:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:43.086 18:30:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:43.086 18:30:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:43.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:43.086 18:30:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:43.086 18:30:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:43.345 18:30:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:43.345 18:30:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:22:43.345 18:30:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:43.602 NVMe0n1 00:22:43.602 18:30:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:44.168 00:22:44.168 18:30:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1318033 00:22:44.168 18:30:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:44.168 18:30:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:45.102 18:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:45.360 18:30:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:48.641 18:30:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:48.899 00:22:48.899 18:30:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:49.157 [2024-10-08 18:30:07.269950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152aef0 is same with the state(6) to be set 00:22:49.157 [2024-10-08 18:30:07.270020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152aef0 is same with the state(6) to be set 00:22:49.157 [2024-10-08 18:30:07.270037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152aef0 is same with the state(6) to be set 00:22:49.157 [2024-10-08 18:30:07.270050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152aef0 is same with the state(6) to be set 00:22:49.157 [2024-10-08 18:30:07.270062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152aef0 is same with the state(6) to be set 00:22:49.157 [2024-10-08 18:30:07.270074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152aef0 is same with the state(6) to be set 00:22:49.157 [2024-10-08 18:30:07.270087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152aef0 is same with the state(6) to be set 00:22:49.157 [2024-10-08 18:30:07.270099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152aef0 is same with the state(6) to be set 00:22:49.157 [2024-10-08 18:30:07.270111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152aef0 is same with the state(6) to be set 00:22:49.157 [2024-10-08 18:30:07.270123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152aef0 is same with the state(6) to be set 00:22:49.157 [2024-10-08 18:30:07.270135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152aef0 is same with the state(6) to be set 00:22:49.157 [2024-10-08 18:30:07.270147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152aef0 is same with the state(6) to be set 00:22:49.157 [2024-10-08 18:30:07.270159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152aef0 is same with the state(6) to be set 00:22:49.157 [2024-10-08 18:30:07.270171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152aef0 is same with the state(6) to be set 00:22:49.157 [2024-10-08 18:30:07.270184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152aef0 is same with the state(6) to be set 00:22:49.157 [2024-10-08 18:30:07.270211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152aef0 is same with the state(6) to be set 00:22:49.157 [2024-10-08 18:30:07.270222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152aef0 is same with the state(6) to be set 00:22:49.157 [2024-10-08 18:30:07.270234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152aef0 is same with the state(6) to be set 00:22:49.157 [2024-10-08 18:30:07.270246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152aef0 is same with the state(6) to be set 00:22:49.157 [2024-10-08 18:30:07.270257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152aef0 is same with the state(6) to be set 00:22:49.157 [2024-10-08 18:30:07.270284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x152aef0 is same with the state(6) to be set 00:22:49.157 [2024-10-08 18:30:07.270306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152aef0 is same with the state(6) to be set 00:22:49.157 [2024-10-08 18:30:07.270319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152aef0 is same with the state(6) to be set 00:22:49.157 [2024-10-08 18:30:07.270331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152aef0 is same with the state(6) to be set 00:22:49.157 [2024-10-08 18:30:07.270343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152aef0 is same with the state(6) to be set 00:22:49.157 [2024-10-08 18:30:07.270371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152aef0 is same with the state(6) to be set 00:22:49.157 [2024-10-08 18:30:07.270388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152aef0 is same with the state(6) to be set 00:22:49.157 [2024-10-08 18:30:07.270411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152aef0 is same with the state(6) to be set 00:22:49.157 [2024-10-08 18:30:07.270427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152aef0 is same with the state(6) to be set 00:22:49.157 [2024-10-08 18:30:07.270439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152aef0 is same with the state(6) to be set 00:22:49.157 18:30:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:52.439 18:30:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:52.439 [2024-10-08 18:30:10.547992] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.439 18:30:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:53.420 18:30:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:53.678 18:30:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1318033 00:23:00.237 { 00:23:00.237 "results": [ 00:23:00.237 { 00:23:00.237 "job": "NVMe0n1", 00:23:00.237 "core_mask": "0x1", 00:23:00.237 "workload": "verify", 00:23:00.237 "status": "finished", 00:23:00.237 "verify_range": { 00:23:00.237 "start": 0, 00:23:00.237 "length": 16384 00:23:00.237 }, 00:23:00.237 "queue_depth": 128, 00:23:00.237 "io_size": 4096, 00:23:00.237 "runtime": 15.003874, 00:23:00.237 "iops": 8200.815336092532, 00:23:00.237 "mibps": 32.03443490661145, 00:23:00.237 "io_failed": 12428, 00:23:00.237 "io_timeout": 0, 00:23:00.237 "avg_latency_us": 14148.866664042098, 00:23:00.237 "min_latency_us": 540.0651851851852, 00:23:00.237 "max_latency_us": 17670.447407407406 00:23:00.237 } 00:23:00.237 ], 00:23:00.237 "core_count": 1 00:23:00.237 } 00:23:00.237 18:30:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1317870 00:23:00.237 18:30:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1317870 ']' 00:23:00.237 18:30:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1317870 00:23:00.237 18:30:17 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@955 -- # uname 00:23:00.237 18:30:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:00.237 18:30:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1317870 00:23:00.237 18:30:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:00.237 18:30:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:00.237 18:30:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1317870' 00:23:00.237 killing process with pid 1317870 00:23:00.237 18:30:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1317870 00:23:00.237 18:30:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1317870 00:23:00.237 18:30:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:00.237 [2024-10-08 18:30:01.205826] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:23:00.237 [2024-10-08 18:30:01.205917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1317870 ] 00:23:00.237 [2024-10-08 18:30:01.269104] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.237 [2024-10-08 18:30:01.384240] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.237 Running I/O for 15 seconds... 00:23:00.237 8389.00 IOPS, 32.77 MiB/s [2024-10-08T16:30:18.524Z] [2024-10-08 18:30:03.486325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.237 [2024-10-08 18:30:03.486386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.237 [2024-10-08 18:30:03.486406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.237 [2024-10-08 18:30:03.486421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.237 [2024-10-08 18:30:03.486436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.237 [2024-10-08 18:30:03.486449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.237 [2024-10-08 18:30:03.486464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.237 [2024-10-08 18:30:03.486478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.237 [2024-10-08 18:30:03.486491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e2620 is same with the state(6) to be set 00:23:00.237 [2024-10-08 18:30:03.488229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:77264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.237 [2024-10-08 
18:30:03.488255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.488303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:77272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.238 [2024-10-08 18:30:03.488336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.488354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:77280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.238 [2024-10-08 18:30:03.488371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.488387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:77288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.238 [2024-10-08 18:30:03.488402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.488417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:77296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.238 [2024-10-08 18:30:03.488432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.488448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.238 [2024-10-08 18:30:03.488463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.488479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.238 [2024-10-08 18:30:03.488510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.488527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:77320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.238 [2024-10-08 18:30:03.488542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.488557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:77328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.238 [2024-10-08 18:30:03.488572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.488587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:77392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.488602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.488618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:77400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.488632] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.488647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:77336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.238 [2024-10-08 18:30:03.488662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.488677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.238 [2024-10-08 18:30:03.488691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.488706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.238 [2024-10-08 18:30:03.488720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.488735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.238 [2024-10-08 18:30:03.488749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.488764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:77368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.238 [2024-10-08 18:30:03.488778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.488793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:77376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.238 [2024-10-08 18:30:03.488822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.488837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.238 [2024-10-08 18:30:03.488851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.488866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:77408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.488879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.488898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:77416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.488912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.488927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.488941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.488956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:77432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.488969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.488984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:77440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.488997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.489012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:77448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.489026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.489040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:77456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.489054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.489068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:77464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.489081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.489096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:77472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.489109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.489124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:77480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.489137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.489151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:77488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.489165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.489179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:77496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.489193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.489207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:77504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.489220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.489235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:77512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.489253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.489284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:77520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.489309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.489325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.489340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.489356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:77536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.489370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.489385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:77544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.489398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.489414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.489428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.489443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.489458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.489472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:77568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.489487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.489502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.489516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.489531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:77584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.489545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 
18:30:03.489560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.489589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.489605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:77600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.489619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.489633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:77608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.489647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.489662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.489686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.489701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:77624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.489716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.489731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.489745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.489760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:77640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.489775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.489791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.489806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.489821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.489835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.489865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.489881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.489897] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.489911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.489927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.489942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.489957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:77688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.489971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.489987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.490002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.490017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.490032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.490048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.490063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.490083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.490098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.490114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.490129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.490145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.490160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.490175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.490189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.490205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:11 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.490219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.490234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.490248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.490264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.490278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.490304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:77776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.490320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.490335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.490350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.490365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.490379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.490395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.490409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.490424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.490439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.490454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.490472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.490488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.490503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.490518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:77832 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.490533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.490548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.490562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.490578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.490592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.490607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:77856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.490621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.238 [2024-10-08 18:30:03.490637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.238 [2024-10-08 18:30:03.490651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.490666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.490680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.490696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.490710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.490725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.490739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.490755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.490769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.490785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:77904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.490800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.490815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 
18:30:03.490830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.490850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.490865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.490881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.490895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.490911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.490926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.490941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.490955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.490970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.490984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.490999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.491013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.491028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.491042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.491058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.491072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.491087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.491101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.491115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.491130] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.491145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.491160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.491175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.491189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.491205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.491219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.491238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.491253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.491268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.491282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.491307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.491322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.491337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:78048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.491351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.491366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.491381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.491396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:78064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.491410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.491425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:78072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.491440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.491455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:78080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.491469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.491484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:78088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.491498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.491513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.491527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.491543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.491557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.491572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:78112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.491586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.491601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.491620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.491636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:78128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.491650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.491665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.491680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.491695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:78144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.491710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.491725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.491739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.491754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:03.491768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.491800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.239 [2024-10-08 18:30:03.491817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78168 len:8 PRP1 0x0 PRP2 0x0 00:23:00.239 [2024-10-08 18:30:03.491830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.491848] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.239 [2024-10-08 18:30:03.491860] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.239 [2024-10-08 18:30:03.491871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78176 len:8 PRP1 0x0 PRP2 0x0 00:23:00.239 [2024-10-08 18:30:03.491884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.491896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.239 [2024-10-08 18:30:03.491907] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.239 [2024-10-08 18:30:03.491918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78184 len:8 PRP1 0x0 PRP2 0x0 00:23:00.239 [2024-10-08 18:30:03.491930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.491943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.239 [2024-10-08 18:30:03.491954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.239 [2024-10-08 18:30:03.491964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78192 len:8 PRP1 0x0 PRP2 0x0 00:23:00.239 [2024-10-08 18:30:03.491977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.491990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.239 [2024-10-08 18:30:03.492000] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.239 [2024-10-08 18:30:03.492016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78200 len:8 PRP1 0x0 PRP2 0x0 00:23:00.239 [2024-10-08 18:30:03.492029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.492043] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.239 [2024-10-08 18:30:03.492054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.239 [2024-10-08 18:30:03.492064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78208 len:8 PRP1 
0x0 PRP2 0x0 00:23:00.239 [2024-10-08 18:30:03.492077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.492089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.239 [2024-10-08 18:30:03.492100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.239 [2024-10-08 18:30:03.492111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78216 len:8 PRP1 0x0 PRP2 0x0 00:23:00.239 [2024-10-08 18:30:03.492123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.492136] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.239 [2024-10-08 18:30:03.492147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.239 [2024-10-08 18:30:03.492158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78224 len:8 PRP1 0x0 PRP2 0x0 00:23:00.239 [2024-10-08 18:30:03.492171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.492184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.239 [2024-10-08 18:30:03.492194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.239 [2024-10-08 18:30:03.492205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78232 len:8 PRP1 0x0 PRP2 0x0 00:23:00.239 [2024-10-08 18:30:03.492218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.492231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.239 [2024-10-08 18:30:03.492242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.239 [2024-10-08 18:30:03.492253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78240 len:8 PRP1 0x0 PRP2 0x0 00:23:00.239 [2024-10-08 18:30:03.492265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.492278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.239 [2024-10-08 18:30:03.492297] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.239 [2024-10-08 18:30:03.492310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78248 len:8 PRP1 0x0 PRP2 0x0 00:23:00.239 [2024-10-08 18:30:03.492322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.492335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.239 [2024-10-08 18:30:03.492346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.239 [2024-10-08 18:30:03.492358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78256 len:8 PRP1 0x0 PRP2 0x0 00:23:00.239 [2024-10-08 18:30:03.492370] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.492383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.239 [2024-10-08 18:30:03.492398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.239 [2024-10-08 18:30:03.492409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78264 len:8 PRP1 0x0 PRP2 0x0 00:23:00.239 [2024-10-08 18:30:03.492422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.492436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.239 [2024-10-08 18:30:03.492447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.239 [2024-10-08 18:30:03.492458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78272 len:8 PRP1 0x0 PRP2 0x0 00:23:00.239 [2024-10-08 18:30:03.492470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.492483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.239 [2024-10-08 18:30:03.492494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.239 [2024-10-08 18:30:03.492505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78280 len:8 PRP1 0x0 PRP2 0x0 00:23:00.239 [2024-10-08 18:30:03.492517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:03.492579] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1102f00 was disconnected and freed. reset controller. 00:23:00.239 [2024-10-08 18:30:03.492597] bdev_nvme.c:2025:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:00.239 [2024-10-08 18:30:03.492613] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:00.239 [2024-10-08 18:30:03.495889] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:00.239 [2024-10-08 18:30:03.495926] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10e2620 (9): Bad file descriptor 00:23:00.239 [2024-10-08 18:30:03.661707] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:00.239 7685.00 IOPS, 30.02 MiB/s [2024-10-08T16:30:18.526Z] 7903.67 IOPS, 30.87 MiB/s [2024-10-08T16:30:18.526Z] 8023.50 IOPS, 31.34 MiB/s [2024-10-08T16:30:18.526Z] [2024-10-08 18:30:07.270357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.239 [2024-10-08 18:30:07.270399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:07.270417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.239 [2024-10-08 18:30:07.270433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:07.270448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.239 [2024-10-08 18:30:07.270462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:07.270476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.239 [2024-10-08 18:30:07.270490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:07.270503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10e2620 is same with the state(6) to be set 00:23:00.239 [2024-10-08 18:30:07.272769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:109608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.239 [2024-10-08 18:30:07.272796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:07.272843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:109824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:07.272861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:07.272878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:109832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:07.272893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:07.272908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:109840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:07.272922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:07.272938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:109848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:07.272952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:07.272968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.239 [2024-10-08 18:30:07.272982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.239 [2024-10-08 18:30:07.272997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:109864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.273012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.273027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.273041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.273056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:109880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.273070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.273085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:109888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.273099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.273114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:109896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.273128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.273143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:109904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.273157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.273172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:109912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.273186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.273201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.273219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.273235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:109928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.273249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.273264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:112 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.273278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.273303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:109944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.273319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.273335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:109952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.273350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.273365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:109960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.273380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.273395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:109968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.273409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.273424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:109976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.273438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.273453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:109984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.273467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.273482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:109992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.273496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.273511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:110000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.273525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.273540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:110008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.273554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.273569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110016 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.273583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.273602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:110024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.273617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.273634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:110032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.273648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.273662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:110040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.273676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.273691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:110048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.273706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.273721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:110056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.273735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.273750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:110064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.273764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.273780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:110072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.273794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.273809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:110080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.273823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.273838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:110088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.273852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.273868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:110096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:00.240 [2024-10-08 18:30:07.273882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.273898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:110104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.273912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.273927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:110112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.273941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.273957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:110120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.273974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.273990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:110128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.274004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.274019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:109616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.240 [2024-10-08 18:30:07.274034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.274049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:109624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.240 [2024-10-08 18:30:07.274063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.274078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.240 [2024-10-08 18:30:07.274092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.274107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:109640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.240 [2024-10-08 18:30:07.274121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.274136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:109648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.240 [2024-10-08 18:30:07.274150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.274165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:109656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.240 [2024-10-08 
18:30:07.274180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.274195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.240 [2024-10-08 18:30:07.274209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.274225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:109672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.240 [2024-10-08 18:30:07.274239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.274255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:109680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.240 [2024-10-08 18:30:07.274269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.274284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:109688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.240 [2024-10-08 18:30:07.274308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.274324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:109696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.240 [2024-10-08 18:30:07.274338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.274354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.240 [2024-10-08 18:30:07.274372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.274388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:109712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.240 [2024-10-08 18:30:07.274403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.274418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:109720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.240 [2024-10-08 18:30:07.274432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.274447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:109728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.240 [2024-10-08 18:30:07.274461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.274476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:109736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.240 [2024-10-08 18:30:07.274490] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.274506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:109744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.240 [2024-10-08 18:30:07.274521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.274536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:109752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.240 [2024-10-08 18:30:07.274550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.274565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:109760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.240 [2024-10-08 18:30:07.274579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.274594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:109768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.240 [2024-10-08 18:30:07.274609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.274624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.240 [2024-10-08 18:30:07.274637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.274653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:109784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.240 [2024-10-08 18:30:07.274667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.274682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:109792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.240 [2024-10-08 18:30:07.274697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.274712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:109800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.240 [2024-10-08 18:30:07.274726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.274745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:109808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.240 [2024-10-08 18:30:07.274760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.274776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:109816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.240 [2024-10-08 18:30:07.274790] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.274805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:110136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.274820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.274835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.274849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.274865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:110152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.274879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.274894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:110160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.274908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.274923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:110168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.274937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.274952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:110176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.240 [2024-10-08 18:30:07.274966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.240 [2024-10-08 18:30:07.274981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:110184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.241 [2024-10-08 18:30:07.274995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.275010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:110192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.241 [2024-10-08 18:30:07.275024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.275039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.241 [2024-10-08 18:30:07.275053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.275069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.241 [2024-10-08 18:30:07.275083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.275099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:110216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.241 [2024-10-08 18:30:07.275116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.275132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:110224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.241 [2024-10-08 18:30:07.275147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.275162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:110232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.241 [2024-10-08 18:30:07.275176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.275192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.241 [2024-10-08 18:30:07.275206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.275221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.241 [2024-10-08 18:30:07.275236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.275250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:110256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.241 [2024-10-08 18:30:07.275265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.275280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.241 [2024-10-08 18:30:07.275387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.275407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:110272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.241 [2024-10-08 18:30:07.275423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.275438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:110280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.241 [2024-10-08 18:30:07.275453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.275468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:110288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.241 [2024-10-08 18:30:07.275482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:00.241 [2024-10-08 18:30:07.275497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:110296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.241 [2024-10-08 18:30:07.275511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.275526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.241 [2024-10-08 18:30:07.275540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.275555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.241 [2024-10-08 18:30:07.275569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.275589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.241 [2024-10-08 18:30:07.275604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.275620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:110328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.241 [2024-10-08 18:30:07.275634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.275650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.241 [2024-10-08 18:30:07.275664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.275679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.241 [2024-10-08 18:30:07.275693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.275709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.241 [2024-10-08 18:30:07.275723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.275738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:110360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.241 [2024-10-08 18:30:07.275751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.275767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.241 [2024-10-08 18:30:07.275781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.275796] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:110376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.241 [2024-10-08 18:30:07.275810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.275825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.241 [2024-10-08 18:30:07.275839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.275869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.241 [2024-10-08 18:30:07.275886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110392 len:8 PRP1 0x0 PRP2 0x0 00:23:00.241 [2024-10-08 18:30:07.275899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.275916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.241 [2024-10-08 18:30:07.275928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.241 [2024-10-08 18:30:07.275939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110400 len:8 PRP1 0x0 PRP2 0x0 00:23:00.241 [2024-10-08 18:30:07.275952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.275965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.241 [2024-10-08 18:30:07.275975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.241 [2024-10-08 18:30:07.275990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110408 len:8 PRP1 0x0 PRP2 0x0 00:23:00.241 [2024-10-08 18:30:07.276004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.276017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.241 [2024-10-08 18:30:07.276028] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.241 [2024-10-08 18:30:07.276039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110416 len:8 PRP1 0x0 PRP2 0x0 00:23:00.241 [2024-10-08 18:30:07.276051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.276064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.241 [2024-10-08 18:30:07.276075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.241 [2024-10-08 18:30:07.276085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110424 len:8 PRP1 0x0 PRP2 0x0 00:23:00.241 [2024-10-08 18:30:07.276097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.276110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:23:00.241 [2024-10-08 18:30:07.276120] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.241 [2024-10-08 18:30:07.276132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110432 len:8 PRP1 0x0 PRP2 0x0 00:23:00.241 [2024-10-08 18:30:07.276144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.276157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.241 [2024-10-08 18:30:07.276168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.241 [2024-10-08 18:30:07.276179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110440 len:8 PRP1 0x0 PRP2 0x0 00:23:00.241 [2024-10-08 18:30:07.276192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.276204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.241 [2024-10-08 18:30:07.276215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.241 [2024-10-08 18:30:07.276225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110448 len:8 PRP1 0x0 PRP2 0x0 00:23:00.241 [2024-10-08 18:30:07.276237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.276250] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.241 [2024-10-08 18:30:07.276261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.241 [2024-10-08 18:30:07.276271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110456 len:8 PRP1 0x0 PRP2 0x0 00:23:00.241 [2024-10-08 18:30:07.276284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.276306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.241 [2024-10-08 18:30:07.276318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.241 [2024-10-08 18:30:07.276329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110464 len:8 PRP1 0x0 PRP2 0x0 00:23:00.241 [2024-10-08 18:30:07.276341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.276358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.241 [2024-10-08 18:30:07.276369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.241 [2024-10-08 18:30:07.276380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110472 len:8 PRP1 0x0 PRP2 0x0 00:23:00.241 [2024-10-08 18:30:07.276393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.276405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.241 
[2024-10-08 18:30:07.276416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.241 [2024-10-08 18:30:07.276427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110480 len:8 PRP1 0x0 PRP2 0x0 00:23:00.241 [2024-10-08 18:30:07.276439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.276452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.241 [2024-10-08 18:30:07.276465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.241 [2024-10-08 18:30:07.276477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110488 len:8 PRP1 0x0 PRP2 0x0 00:23:00.241 [2024-10-08 18:30:07.276490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.276503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.241 [2024-10-08 18:30:07.276515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.241 [2024-10-08 18:30:07.276527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110496 len:8 PRP1 0x0 PRP2 0x0 00:23:00.241 [2024-10-08 18:30:07.276539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.276552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.241 [2024-10-08 18:30:07.276563] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.241 [2024-10-08 18:30:07.276574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110504 len:8 PRP1 0x0 PRP2 0x0 00:23:00.241 [2024-10-08 18:30:07.276587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.276600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.241 [2024-10-08 18:30:07.276611] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.241 [2024-10-08 18:30:07.276622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110512 len:8 PRP1 0x0 PRP2 0x0 00:23:00.241 [2024-10-08 18:30:07.276634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.276647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.241 [2024-10-08 18:30:07.276658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.241 [2024-10-08 18:30:07.276669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110520 len:8 PRP1 0x0 PRP2 0x0 00:23:00.241 [2024-10-08 18:30:07.276682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.276695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.241 [2024-10-08 18:30:07.276706] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.241 [2024-10-08 18:30:07.276717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110528 len:8 PRP1 0x0 PRP2 0x0 00:23:00.241 [2024-10-08 18:30:07.276733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.276746] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.241 [2024-10-08 18:30:07.276758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.241 [2024-10-08 18:30:07.276769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110536 len:8 PRP1 0x0 PRP2 0x0 00:23:00.241 [2024-10-08 18:30:07.276782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.276795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.241 [2024-10-08 18:30:07.276806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.241 [2024-10-08 18:30:07.276817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110544 len:8 PRP1 0x0 PRP2 0x0 00:23:00.241 [2024-10-08 18:30:07.276830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.276844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.241 [2024-10-08 18:30:07.276855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.241 [2024-10-08 18:30:07.276866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110552 len:8 PRP1 0x0 PRP2 0x0 00:23:00.241 [2024-10-08 18:30:07.276879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.276892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.241 [2024-10-08 18:30:07.276903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.241 [2024-10-08 18:30:07.276914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110560 len:8 PRP1 0x0 PRP2 0x0 00:23:00.241 [2024-10-08 18:30:07.276927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.276940] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.241 [2024-10-08 18:30:07.276951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.241 [2024-10-08 18:30:07.276962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110568 len:8 PRP1 0x0 PRP2 0x0 00:23:00.241 [2024-10-08 18:30:07.276975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.276988] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.241 [2024-10-08 18:30:07.276998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:00.241 [2024-10-08 18:30:07.277009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110576 len:8 PRP1 0x0 PRP2 0x0 00:23:00.241 [2024-10-08 18:30:07.277022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.277035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.241 [2024-10-08 18:30:07.277047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.241 [2024-10-08 18:30:07.277058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110584 len:8 PRP1 0x0 PRP2 0x0 00:23:00.241 [2024-10-08 18:30:07.277071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.277084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.241 [2024-10-08 18:30:07.277095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.241 [2024-10-08 18:30:07.277111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110592 len:8 PRP1 0x0 PRP2 0x0 00:23:00.241 [2024-10-08 18:30:07.277124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.277137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.241 [2024-10-08 18:30:07.277149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.241 [2024-10-08 18:30:07.277162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110600 len:8 PRP1 0x0 PRP2 0x0 00:23:00.241 [2024-10-08 18:30:07.277175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.277188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.241 [2024-10-08 18:30:07.277199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.241 [2024-10-08 18:30:07.277210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110608 len:8 PRP1 0x0 PRP2 0x0 00:23:00.241 [2024-10-08 18:30:07.277223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.277237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.241 [2024-10-08 18:30:07.277248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.241 [2024-10-08 18:30:07.277259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110616 len:8 PRP1 0x0 PRP2 0x0 00:23:00.241 [2024-10-08 18:30:07.277271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.277285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.241 [2024-10-08 18:30:07.277305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.241 
[2024-10-08 18:30:07.277318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110624 len:8 PRP1 0x0 PRP2 0x0 00:23:00.241 [2024-10-08 18:30:07.277330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.241 [2024-10-08 18:30:07.277387] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1104fd0 was disconnected and freed. reset controller. 00:23:00.241 [2024-10-08 18:30:07.277406] bdev_nvme.c:2025:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:00.241 [2024-10-08 18:30:07.277421] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:00.241 [2024-10-08 18:30:07.280676] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:00.241 [2024-10-08 18:30:07.280714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10e2620 (9): Bad file descriptor 00:23:00.241 7986.80 IOPS, 31.20 MiB/s [2024-10-08T16:30:18.528Z] [2024-10-08 18:30:07.405248] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:00.241 8009.83 IOPS, 31.29 MiB/s [2024-10-08T16:30:18.528Z] 8074.29 IOPS, 31.54 MiB/s [2024-10-08T16:30:18.528Z] 8113.75 IOPS, 31.69 MiB/s [2024-10-08T16:30:18.528Z] 8157.33 IOPS, 31.86 MiB/s [2024-10-08T16:30:18.528Z] [2024-10-08 18:30:11.844782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:56552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.242 [2024-10-08 18:30:11.844843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.844870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:56560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.242 [2024-10-08 18:30:11.844888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.844905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:56568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.242 [2024-10-08 18:30:11.844931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.844948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.242 [2024-10-08 18:30:11.844963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.844979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:56584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.242 [2024-10-08 18:30:11.844994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.845009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:56592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.242 [2024-10-08 18:30:11.845024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.845039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:56600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.242 [2024-10-08 18:30:11.845054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.845069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:56608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.242 [2024-10-08 18:30:11.845084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.845099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:56616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.242 [2024-10-08 18:30:11.845114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.845129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:56624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.242 [2024-10-08 18:30:11.845143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.845159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:56632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.242 [2024-10-08 18:30:11.845173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.845189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:56640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.242 [2024-10-08 18:30:11.845203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.845219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:56648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.242 [2024-10-08 18:30:11.845233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.845248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:56656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.242 [2024-10-08 18:30:11.845262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.845278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:55728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.845311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.845336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:55736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.845351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 
18:30:11.845367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:55744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.845382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.845398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:55752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.845412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.845427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:55760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.845442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.845459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:55768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.845474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.845489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:55776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.845503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.845519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:56664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.242 [2024-10-08 18:30:11.845533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.845549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:55784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.845563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.845578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:55792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.845601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.845632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:55800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.845645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.845660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:55808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.845674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.845689] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:55816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.845703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.845718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:55824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.845736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.845752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:55832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.845766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.845781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:55840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.845795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.845810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.845823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.845838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:55856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.845852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.845867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:55864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.845880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.845895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:55872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.845917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.845932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:55880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.845946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.845961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:55888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.845982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.845997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:37 nsid:1 lba:55896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.846011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.846026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:55904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.846040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.846054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:55912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.846068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.846083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:55920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.846096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.846115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:55928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.846130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.846145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:55936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.846159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.846174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:55944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.846188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.846203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:55952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.846216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.846231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:55960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.846245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.846260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:55968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.846297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.846315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:55976 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.846330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.846346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:55984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.846360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.846375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.846389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.846405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:56000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.846418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.846434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:56008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.846449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.846464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:56016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.846478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.846493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:56024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.846512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.846530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:56032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.846546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.846561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:56040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.846591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.846607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:56048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.846621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.846636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:56056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:00.242 [2024-10-08 18:30:11.846650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.846665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:56064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.846680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.846695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:56072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.846712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.846727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:56080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.846742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.846757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:56088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.846771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.846786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:56096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.846800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.846815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:56104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.846844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.846861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:56112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.846876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.846892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:56120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.846906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.846922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:56128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.846941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.846957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:56136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.846972] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.846988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:56144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.847002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.847018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:56152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.847033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.847049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:56160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.847065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.847081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:56168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.847096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.847112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:56176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.847126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.847142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:56184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.847157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.242 [2024-10-08 18:30:11.847172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:56192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.242 [2024-10-08 18:30:11.847187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.847203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:56200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.847218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.847233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:56208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.847247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.847263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:56216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.847278] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.847301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:56224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.847317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.847344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:56232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.847360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.847376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:56240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.847391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.847406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:56248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.847420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.847436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:56256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.847451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.847466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:56264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.847480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.847497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:56272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.847512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.847528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:56280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.847543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.847558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:56288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.847573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.847588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:56296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.847603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.847619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:56304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.847633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.847656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:56312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.847670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.847687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:56320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.847701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.847720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:56328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.847738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.847754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:56336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.847770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.847785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:56344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.847800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.847816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:56352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.847830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.847846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:56360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.847860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.847876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:56368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.847890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.847906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:56376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.847920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.847936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:56384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.847951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.847966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:56392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.847981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.847996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:56400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.848011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.848026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:56408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.848040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.848058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:56416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.848072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.848088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:56672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.243 [2024-10-08 18:30:11.848102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.848121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:56680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.243 [2024-10-08 18:30:11.848136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.848152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:56688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.243 [2024-10-08 18:30:11.848167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.848182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:56696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.243 [2024-10-08 18:30:11.848196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.848211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:56704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.243 [2024-10-08 18:30:11.848225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 
[2024-10-08 18:30:11.848240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:56712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.243 [2024-10-08 18:30:11.848254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.848269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:56720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.243 [2024-10-08 18:30:11.848284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.848308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:56728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.243 [2024-10-08 18:30:11.848323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.848338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:56736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.243 [2024-10-08 18:30:11.848353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.848369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:56744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.243 [2024-10-08 18:30:11.848383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.848398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:56424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.848412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.848427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:56432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.848441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.848456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:56440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.848470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.848487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:56448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.848505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.848521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:56456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.848535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.848550] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:56464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.848565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.848584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:56472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.848598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.848613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:56480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.848627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.848649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:56488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.848664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.848679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:56496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.848693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.848709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:56504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.848723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.848739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:56512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.848753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.848768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:56520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.848782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.848797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:56528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.848812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.848827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:56536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.243 [2024-10-08 18:30:11.848841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.848855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1104c90 is same with the state(6) to be set 00:23:00.243 [2024-10-08 18:30:11.848871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.243 [2024-10-08 18:30:11.848883] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.243 [2024-10-08 18:30:11.848898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56544 len:8 PRP1 0x0 PRP2 0x0 00:23:00.243 [2024-10-08 18:30:11.848911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.848983] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1104c90 was disconnected and freed. reset controller. 00:23:00.243 [2024-10-08 18:30:11.849001] bdev_nvme.c:2025:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:00.243 [2024-10-08 18:30:11.849035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.243 [2024-10-08 18:30:11.849054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.849069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.243 [2024-10-08 18:30:11.849082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.849096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.243 [2024-10-08 18:30:11.849109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.849122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.243 [2024-10-08 18:30:11.849136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.243 [2024-10-08 18:30:11.849149] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:00.243 [2024-10-08 18:30:11.849202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10e2620 (9): Bad file descriptor 00:23:00.243 [2024-10-08 18:30:11.852632] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:00.243 [2024-10-08 18:30:11.882749] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
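The two bursts of "aborting queued i/o" / "ABORTED - SQ DELETION (00/08)" records above are the expected side effect of each failover leg: when the active TCP path drops, every command still queued on the I/O qpair is completed manually with SQ DELETION status, bdev_nvme frees the disconnected qpair, switches the trid to the next portal (4421 to 4422 at 18:30:07, then 4422 to 4420 at 18:30:11), marks the controller failed, disconnects it, and resets it, with "Resetting controller successful" closing each leg. A quick sketch for confirming the sequence from a saved copy of this console output (the file name here is only an assumption):

  # count failover legs and successful resets in the captured log (log file name assumed)
  grep -c 'Start failover from' nvmf_failover_console.log
  grep -c 'Resetting controller successful' nvmf_failover_console.log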
00:23:00.243 8146.10 IOPS, 31.82 MiB/s [2024-10-08T16:30:18.530Z] 8164.09 IOPS, 31.89 MiB/s [2024-10-08T16:30:18.530Z] 8166.83 IOPS, 31.90 MiB/s [2024-10-08T16:30:18.530Z] 8171.69 IOPS, 31.92 MiB/s [2024-10-08T16:30:18.530Z] 8184.21 IOPS, 31.97 MiB/s
00:23:00.243 Latency(us)
00:23:00.243 [2024-10-08T16:30:18.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:00.243 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:00.243 Verification LBA range: start 0x0 length 0x4000
00:23:00.243 NVMe0n1 : 15.00 8200.82 32.03 828.32 0.00 14148.87 540.07 17670.45
00:23:00.243 [2024-10-08T16:30:18.530Z] ===================================================================================================================
00:23:00.243 [2024-10-08T16:30:18.530Z] Total : 8200.82 32.03 828.32 0.00 14148.87 540.07 17670.45
00:23:00.243 Received shutdown signal, test time was about 15.000000 seconds
00:23:00.243
00:23:00.243 Latency(us)
00:23:00.243 [2024-10-08T16:30:18.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:00.243 [2024-10-08T16:30:18.530Z] ===================================================================================================================
00:23:00.243 [2024-10-08T16:30:18.530Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:00.243 18:30:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:23:00.243 18:30:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:23:00.243 18:30:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:23:00.243 18:30:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1320391
00:23:00.243 18:30:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:23:00.243 18:30:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1320391 /var/tmp/bdevperf.sock
00:23:00.243 18:30:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1320391 ']'
00:23:00.243 18:30:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:00.243 18:30:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:00.243 18:30:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:00.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
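The trace above is the verification half of host/failover.sh: it greps the first bdevperf log for "Resetting controller successful", expects exactly 3 hits (one per failover leg), and then launches a second bdevperf instance in RPC-wait mode (-z) on /var/tmp/bdevperf.sock for the targeted-failover phase. A minimal sketch of the same flow, with the workspace prefix shortened, output redirection omitted, and the captured log name (try.txt) taken from the later trace:

  # verify the three failover legs completed, then relaunch bdevperf waiting for RPC configuration
  count=$(grep -c 'Resetting controller successful' try.txt)
  (( count == 3 )) || exit 1
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock   # helper from autotest_common.sh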
00:23:00.243 18:30:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:00.243 18:30:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:00.243 18:30:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:00.243 18:30:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:23:00.243 18:30:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:00.243 [2024-10-08 18:30:18.336350] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:00.243 18:30:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:00.501 [2024-10-08 18:30:18.645328] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:00.501 18:30:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:00.759 NVMe0n1 00:23:00.759 18:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:01.324 00:23:01.324 18:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:01.582 00:23:01.582 18:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:01.582 18:30:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:01.840 18:30:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:02.098 18:30:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:05.432 18:30:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:05.432 18:30:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:05.432 18:30:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1321075 00:23:05.432 18:30:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:05.432 18:30:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1321075 00:23:06.806 { 00:23:06.806 "results": [ 00:23:06.806 { 00:23:06.806 "job": "NVMe0n1", 00:23:06.806 "core_mask": "0x1", 
00:23:06.806 "workload": "verify", 00:23:06.806 "status": "finished", 00:23:06.806 "verify_range": { 00:23:06.806 "start": 0, 00:23:06.806 "length": 16384 00:23:06.806 }, 00:23:06.806 "queue_depth": 128, 00:23:06.806 "io_size": 4096, 00:23:06.806 "runtime": 1.012737, 00:23:06.806 "iops": 8028.737964545583, 00:23:06.806 "mibps": 31.362257674006184, 00:23:06.806 "io_failed": 0, 00:23:06.806 "io_timeout": 0, 00:23:06.806 "avg_latency_us": 15875.803227337532, 00:23:06.806 "min_latency_us": 1577.7185185185185, 00:23:06.806 "max_latency_us": 13010.10962962963 00:23:06.806 } 00:23:06.806 ], 00:23:06.806 "core_count": 1 00:23:06.806 } 00:23:06.806 18:30:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:06.806 [2024-10-08 18:30:17.751780] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:23:06.806 [2024-10-08 18:30:17.751865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1320391 ] 00:23:06.806 [2024-10-08 18:30:17.809936] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.806 [2024-10-08 18:30:17.917249] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.806 [2024-10-08 18:30:20.339719] bdev_nvme.c:2025:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:06.806 [2024-10-08 18:30:20.339820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:06.807 [2024-10-08 18:30:20.339858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.807 [2024-10-08 18:30:20.339878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:06.807 [2024-10-08 18:30:20.339892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.807 [2024-10-08 18:30:20.339907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:06.807 [2024-10-08 18:30:20.339920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.807 [2024-10-08 18:30:20.339935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:06.807 [2024-10-08 18:30:20.339949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.807 [2024-10-08 18:30:20.339963] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:06.807 [2024-10-08 18:30:20.340013] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:06.807 [2024-10-08 18:30:20.340046] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1454620 (9): Bad file descriptor 00:23:06.807 [2024-10-08 18:30:20.351359] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:06.807 Running I/O for 1 seconds... 00:23:06.807 8002.00 IOPS, 31.26 MiB/s 00:23:06.807 Latency(us) 00:23:06.807 [2024-10-08T16:30:25.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:06.807 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:06.807 Verification LBA range: start 0x0 length 0x4000 00:23:06.807 NVMe0n1 : 1.01 8028.74 31.36 0.00 0.00 15875.80 1577.72 13010.11 00:23:06.807 [2024-10-08T16:30:25.094Z] =================================================================================================================== 00:23:06.807 [2024-10-08T16:30:25.094Z] Total : 8028.74 31.36 0.00 0.00 15875.80 1577.72 13010.11 00:23:06.807 18:30:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:06.807 18:30:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:07.064 18:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:07.322 18:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:07.322 18:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:07.580 18:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:07.837 18:30:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:11.117 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:11.117 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:11.117 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1320391 00:23:11.117 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1320391 ']' 00:23:11.117 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1320391 00:23:11.117 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:23:11.117 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:11.117 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1320391 00:23:11.117 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:11.117 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:11.117 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1320391' 00:23:11.117 killing process with pid 1320391 00:23:11.117 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1320391 00:23:11.117 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1320391 00:23:11.375 18:30:29 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:11.375 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:11.633 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:11.633 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:11.633 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:11.633 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:11.633 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:23:11.633 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:11.633 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:23:11.633 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:11.633 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:11.633 rmmod nvme_tcp 00:23:11.633 rmmod nvme_fabrics 00:23:11.633 rmmod nvme_keyring 00:23:11.891 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:11.891 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:23:11.891 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:23:11.891 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 1317497 ']' 00:23:11.891 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 1317497 00:23:11.891 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1317497 ']' 00:23:11.891 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1317497 00:23:11.891 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:23:11.891 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:11.891 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1317497 00:23:11.891 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:11.891 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:11.891 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1317497' 00:23:11.891 killing process with pid 1317497 00:23:11.891 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1317497 00:23:11.891 18:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1317497 00:23:12.150 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:12.150 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:12.150 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:12.150 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:23:12.150 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:23:12.150 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v 
SPDK_NVMF 00:23:12.150 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:23:12.150 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:12.150 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:12.150 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.150 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:12.150 18:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.055 18:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:14.055 00:23:14.055 real 0m36.151s 00:23:14.055 user 2m7.583s 00:23:14.055 sys 0m5.924s 00:23:14.055 18:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:14.055 18:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:14.055 ************************************ 00:23:14.055 END TEST nvmf_failover 00:23:14.055 ************************************ 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.314 ************************************ 00:23:14.314 START TEST nvmf_host_discovery 00:23:14.314 ************************************ 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:14.314 * Looking for test storage... 
00:23:14.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:14.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.314 --rc genhtml_branch_coverage=1 00:23:14.314 --rc genhtml_function_coverage=1 00:23:14.314 --rc genhtml_legend=1 00:23:14.314 --rc geninfo_all_blocks=1 00:23:14.314 --rc geninfo_unexecuted_blocks=1 00:23:14.314 00:23:14.314 ' 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:14.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.314 --rc genhtml_branch_coverage=1 00:23:14.314 --rc genhtml_function_coverage=1 00:23:14.314 --rc genhtml_legend=1 00:23:14.314 --rc geninfo_all_blocks=1 00:23:14.314 --rc geninfo_unexecuted_blocks=1 00:23:14.314 00:23:14.314 ' 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:14.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.314 --rc genhtml_branch_coverage=1 00:23:14.314 --rc genhtml_function_coverage=1 00:23:14.314 --rc genhtml_legend=1 00:23:14.314 --rc geninfo_all_blocks=1 00:23:14.314 --rc geninfo_unexecuted_blocks=1 00:23:14.314 00:23:14.314 ' 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:14.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.314 --rc genhtml_branch_coverage=1 00:23:14.314 --rc genhtml_function_coverage=1 00:23:14.314 --rc genhtml_legend=1 00:23:14.314 --rc geninfo_all_blocks=1 00:23:14.314 --rc geninfo_unexecuted_blocks=1 00:23:14.314 00:23:14.314 ' 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:14.314 18:30:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.314 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.315 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:14.315 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.315 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:14.315 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:14.315 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:14.315 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:14.315 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:14.315 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:14.315 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:14.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:14.315 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:14.315 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:14.315 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:14.315 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:14.315 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:14.315 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:14.315 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:14.315 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:14.315 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:14.315 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:14.315 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:14.315 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:14.315 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:14.315 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:14.315 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:14.315 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.315 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.315 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.315 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:14.315 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:14.315 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:23:14.315 18:30:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:16.848 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:16.848 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:16.848 18:30:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:16.848 Found net devices under 0000:09:00.0: cvl_0_0 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.848 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:16.849 Found net devices under 0000:09:00.1: cvl_0_1 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:16.849 
18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:16.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:16.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:23:16.849 00:23:16.849 --- 10.0.0.2 ping statistics --- 00:23:16.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.849 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:16.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:16.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:23:16.849 00:23:16.849 --- 10.0.0.1 ping statistics --- 00:23:16.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.849 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=1323813 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 1323813 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1323813 ']' 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:16.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:16.849 18:30:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.849 [2024-10-08 18:30:34.963376] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:23:16.849 [2024-10-08 18:30:34.963468] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:16.849 [2024-10-08 18:30:35.027090] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.849 [2024-10-08 18:30:35.133638] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:16.849 [2024-10-08 18:30:35.133687] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:16.849 [2024-10-08 18:30:35.133701] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:16.849 [2024-10-08 18:30:35.133712] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:16.849 [2024-10-08 18:30:35.133722] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:16.849 [2024-10-08 18:30:35.134315] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.108 [2024-10-08 18:30:35.286251] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.108 [2024-10-08 18:30:35.294519] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.108 null0 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.108 null1 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1323832 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1323832 /tmp/host.sock 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1323832 ']' 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:17.108 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:17.108 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.108 [2024-10-08 18:30:35.372509] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:23:17.108 [2024-10-08 18:30:35.372598] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1323832 ] 00:23:17.366 [2024-10-08 18:30:35.429715] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.366 [2024-10-08 18:30:35.537995] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.366 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:17.366 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:17.625 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.884 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:17.884 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:17.884 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.884 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.884 [2024-10-08 18:30:35.932118] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:17.884 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.884 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:17.884 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:17.884 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.884 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:17.884 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.884 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:17.884 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:17.884 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.884 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:17.884 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:17.884 18:30:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.884 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:17.884 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.884 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.884 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:17.884 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:17.884 18:30:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:23:17.884 18:30:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:23:18.451 [2024-10-08 18:30:36.725411] bdev_nvme.c:7256:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:18.451 [2024-10-08 18:30:36.725455] bdev_nvme.c:7342:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:18.451 [2024-10-08 18:30:36.725480] bdev_nvme.c:7219:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:18.709 
[2024-10-08 18:30:36.812768] bdev_nvme.c:7185:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:18.709 [2024-10-08 18:30:36.915369] bdev_nvme.c:7075:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:18.709 [2024-10-08 18:30:36.915393] bdev_nvme.c:7034:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:18.967 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.968 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:23:18.968 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:18.968 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:18.968 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:18.968 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:18.968 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:18.968 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:18.968 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:18.968 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:18.968 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:18.968 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:18.968 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:18.968 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.968 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.226 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.226 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:19.226 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:19.226 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:19.226 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:19.226 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:19.226 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.226 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.226 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.226 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:19.226 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:19.226 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:19.226 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:19.226 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:19.226 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:19.226 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:19.226 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.226 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:19.226 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.226 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:19.226 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:19.226 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.226 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:19.226 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:19.226 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:19.226 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:19.227 18:30:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.227 [2024-10-08 18:30:37.384254] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:19.227 [2024-10-08 18:30:37.384724] bdev_nvme.c:7238:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:19.227 [2024-10-08 18:30:37.384761] bdev_nvme.c:7219:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.227 [2024-10-08 18:30:37.470985] bdev_nvme.c:7180:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:19.227 18:30:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:19.227 18:30:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:23:19.485 [2024-10-08 18:30:37.529800] bdev_nvme.c:7075:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:19.485 [2024-10-08 18:30:37.529821] bdev_nvme.c:7034:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:19.485 [2024-10-08 18:30:37.529830] bdev_nvme.c:7034:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:20.421 18:30:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.421 [2024-10-08 18:30:38.596009] bdev_nvme.c:7238:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:20.421 [2024-10-08 18:30:38.596039] bdev_nvme.c:7219:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:20.421 [2024-10-08 18:30:38.597316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.421 [2024-10-08 18:30:38.597349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.421 [2024-10-08 18:30:38.597370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.421 [2024-10-08 18:30:38.597385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.421 [2024-10-08 18:30:38.597398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.421 [2024-10-08 18:30:38.597412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.421 [2024-10-08 18:30:38.597426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.421 [2024-10-08 18:30:38.597438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.421 [2024-10-08 18:30:38.597452] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206e610 is same with the state(6) to be set 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:20.421 [2024-10-08 18:30:38.607316] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206e610 (9): Bad file descriptor 00:23:20.421 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.421 [2024-10-08 18:30:38.617357] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:20.421 [2024-10-08 18:30:38.617502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.422 [2024-10-08 18:30:38.617531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x206e610 with addr=10.0.0.2, port=4420 00:23:20.422 [2024-10-08 18:30:38.617548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206e610 is same with the state(6) to be set 00:23:20.422 [2024-10-08 18:30:38.617571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206e610 (9): Bad file descriptor 00:23:20.422 [2024-10-08 18:30:38.617606] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:20.422 [2024-10-08 18:30:38.617624] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:20.422 [2024-10-08 18:30:38.617638] 
nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:20.422 [2024-10-08 18:30:38.617658] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:20.422 [2024-10-08 18:30:38.627436] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:20.422 [2024-10-08 18:30:38.627594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.422 [2024-10-08 18:30:38.627622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x206e610 with addr=10.0.0.2, port=4420 00:23:20.422 [2024-10-08 18:30:38.627638] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206e610 is same with the state(6) to be set 00:23:20.422 [2024-10-08 18:30:38.627660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206e610 (9): Bad file descriptor 00:23:20.422 [2024-10-08 18:30:38.627681] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:20.422 [2024-10-08 18:30:38.627695] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:20.422 [2024-10-08 18:30:38.627709] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:20.422 [2024-10-08 18:30:38.627728] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:20.422 [2024-10-08 18:30:38.637523] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:20.422 [2024-10-08 18:30:38.637673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.422 [2024-10-08 18:30:38.637700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x206e610 with addr=10.0.0.2, port=4420 00:23:20.422 [2024-10-08 18:30:38.637716] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206e610 is same with the state(6) to be set 00:23:20.422 [2024-10-08 18:30:38.637738] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206e610 (9): Bad file descriptor 00:23:20.422 [2024-10-08 18:30:38.637770] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:20.422 [2024-10-08 18:30:38.637788] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:20.422 [2024-10-08 18:30:38.637801] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:20.422 [2024-10-08 18:30:38.637820] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:20.422 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.422 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:20.422 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:20.422 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:20.422 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:20.422 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:20.422 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:20.422 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:20.422 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:20.422 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:20.422 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.422 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.422 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:20.422 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:20.422 [2024-10-08 18:30:38.647601] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:20.422 [2024-10-08 18:30:38.647818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.422 [2024-10-08 18:30:38.647848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x206e610 with addr=10.0.0.2, port=4420 00:23:20.422 [2024-10-08 18:30:38.647864] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206e610 is same with the state(6) to be set 00:23:20.422 [2024-10-08 18:30:38.647887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206e610 (9): Bad file descriptor 00:23:20.422 [2024-10-08 18:30:38.647908] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:20.422 [2024-10-08 18:30:38.647922] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:20.422 [2024-10-08 18:30:38.647935] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:20.422 [2024-10-08 18:30:38.647955] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:20.422 [2024-10-08 18:30:38.657690] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:20.422 [2024-10-08 18:30:38.657834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.422 [2024-10-08 18:30:38.657863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x206e610 with addr=10.0.0.2, port=4420 00:23:20.422 [2024-10-08 18:30:38.657879] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206e610 is same with the state(6) to be set 00:23:20.422 [2024-10-08 18:30:38.657901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206e610 (9): Bad file descriptor 00:23:20.422 [2024-10-08 18:30:38.657933] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:20.422 [2024-10-08 18:30:38.657951] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:20.422 [2024-10-08 18:30:38.657964] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:20.422 [2024-10-08 18:30:38.657983] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:20.422 [2024-10-08 18:30:38.667764] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:20.422 [2024-10-08 18:30:38.667933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.422 [2024-10-08 18:30:38.667967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x206e610 with addr=10.0.0.2, port=4420 00:23:20.422 [2024-10-08 18:30:38.667983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206e610 is same with the state(6) to be set 00:23:20.422 [2024-10-08 18:30:38.668005] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206e610 (9): Bad file descriptor 00:23:20.422 [2024-10-08 18:30:38.668025] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:20.422 [2024-10-08 18:30:38.668039] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:20.422 [2024-10-08 18:30:38.668053] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:20.422 [2024-10-08 18:30:38.668072] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:20.422 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.422 [2024-10-08 18:30:38.677851] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:20.422 [2024-10-08 18:30:38.677973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.422 [2024-10-08 18:30:38.678015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x206e610 with addr=10.0.0.2, port=4420 00:23:20.422 [2024-10-08 18:30:38.678031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206e610 is same with the state(6) to be set 00:23:20.422 [2024-10-08 18:30:38.678051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206e610 (9): Bad file descriptor 00:23:20.422 [2024-10-08 18:30:38.678083] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:20.422 [2024-10-08 18:30:38.678100] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:20.422 [2024-10-08 18:30:38.678113] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:20.422 [2024-10-08 18:30:38.678132] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:20.422 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:20.422 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:20.422 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:20.422 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:20.422 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:20.422 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:20.422 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:20.422 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:20.422 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:20.422 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:20.422 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.422 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:20.422 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.422 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:20.422 [2024-10-08 18:30:38.687919] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:20.422 [2024-10-08 18:30:38.688149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.422 [2024-10-08 
18:30:38.688178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x206e610 with addr=10.0.0.2, port=4420 00:23:20.422 [2024-10-08 18:30:38.688195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206e610 is same with the state(6) to be set 00:23:20.422 [2024-10-08 18:30:38.688217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206e610 (9): Bad file descriptor 00:23:20.422 [2024-10-08 18:30:38.688237] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:20.422 [2024-10-08 18:30:38.688252] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:20.422 [2024-10-08 18:30:38.688265] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:20.423 [2024-10-08 18:30:38.688285] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:20.423 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.423 [2024-10-08 18:30:38.697997] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:20.423 [2024-10-08 18:30:38.698180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.423 [2024-10-08 18:30:38.698207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x206e610 with addr=10.0.0.2, port=4420 00:23:20.423 [2024-10-08 18:30:38.698223] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206e610 is same with the state(6) to be set 00:23:20.423 [2024-10-08 18:30:38.698244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206e610 (9): Bad file descriptor 00:23:20.423 [2024-10-08 18:30:38.698276] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:20.423 [2024-10-08 18:30:38.698304] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:20.423 [2024-10-08 18:30:38.698319] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:20.423 [2024-10-08 18:30:38.698338] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:20.682 [2024-10-08 18:30:38.708068] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:20.682 [2024-10-08 18:30:38.708302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.682 [2024-10-08 18:30:38.708333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x206e610 with addr=10.0.0.2, port=4420 00:23:20.682 [2024-10-08 18:30:38.708350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206e610 is same with the state(6) to be set 00:23:20.682 [2024-10-08 18:30:38.708372] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206e610 (9): Bad file descriptor 00:23:20.682 [2024-10-08 18:30:38.708393] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:20.682 [2024-10-08 18:30:38.708406] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:20.682 [2024-10-08 18:30:38.708420] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:20.682 [2024-10-08 18:30:38.708439] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:20.682 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:23:20.682 18:30:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:23:20.682 [2024-10-08 18:30:38.718143] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:20.682 [2024-10-08 18:30:38.718361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.682 [2024-10-08 18:30:38.718392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x206e610 with addr=10.0.0.2, port=4420 00:23:20.682 [2024-10-08 18:30:38.718415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206e610 is same with the state(6) to be set 00:23:20.682 [2024-10-08 18:30:38.718438] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206e610 (9): Bad file descriptor 00:23:20.682 [2024-10-08 18:30:38.718473] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:20.682 [2024-10-08 18:30:38.718491] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:20.682 [2024-10-08 18:30:38.718505] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:20.682 [2024-10-08 18:30:38.718524] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:20.682 [2024-10-08 18:30:38.721847] bdev_nvme.c:7043:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:20.682 [2024-10-08 18:30:38.721872] bdev_nvme.c:7034:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 
-- # (( max-- )) 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:21.616 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.874 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:23:21.874 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:21.874 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:21.874 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:21.874 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:21.874 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:21.874 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:21.874 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:21.874 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:21.874 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:21.874 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:21.874 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:21.874 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.874 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.874 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.874 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:21.874 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:21.874 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:21.874 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:21.874 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:21.874 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.874 18:30:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.808 [2024-10-08 18:30:41.005978] bdev_nvme.c:7256:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:22.808 [2024-10-08 18:30:41.006008] bdev_nvme.c:7342:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:22.808 [2024-10-08 18:30:41.006030] bdev_nvme.c:7219:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:23.066 [2024-10-08 18:30:41.134466] bdev_nvme.c:7185:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:23.066 [2024-10-08 18:30:41.323399] bdev_nvme.c:7075:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:23.066 [2024-10-08 18:30:41.323440] bdev_nvme.c:7034:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:23.066 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.066 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:23.066 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:23.066 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:23.066 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:23.066 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:23.066 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:23.066 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:23.066 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:23:23.066 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.066 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.066 request: 00:23:23.066 { 00:23:23.066 "name": "nvme", 00:23:23.066 "trtype": "tcp", 00:23:23.066 "traddr": "10.0.0.2", 00:23:23.066 "adrfam": "ipv4", 00:23:23.066 "trsvcid": "8009", 00:23:23.066 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:23.066 "wait_for_attach": true, 00:23:23.066 "method": "bdev_nvme_start_discovery", 00:23:23.066 "req_id": 1 00:23:23.066 } 00:23:23.066 Got JSON-RPC error response 00:23:23.066 response: 00:23:23.066 { 00:23:23.066 "code": -17, 00:23:23.066 "message": "File exists" 00:23:23.066 } 00:23:23.066 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:23.066 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:23.066 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:23.066 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:23.066 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:23.066 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:23.066 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:23.066 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:23.066 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.066 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:23.066 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.066 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:23.066 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.324 request: 00:23:23.324 { 00:23:23.324 "name": "nvme_second", 00:23:23.324 "trtype": "tcp", 00:23:23.324 "traddr": "10.0.0.2", 00:23:23.324 "adrfam": "ipv4", 00:23:23.324 "trsvcid": "8009", 00:23:23.324 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:23.324 "wait_for_attach": true, 00:23:23.324 "method": "bdev_nvme_start_discovery", 00:23:23.324 "req_id": 1 00:23:23.324 } 00:23:23.324 Got JSON-RPC error response 00:23:23.324 response: 00:23:23.324 { 00:23:23.324 "code": -17, 00:23:23.324 "message": "File exists" 00:23:23.324 } 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:23.324 18:30:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.324 18:30:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.258 [2024-10-08 18:30:42.526899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.258 [2024-10-08 18:30:42.526983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209de00 with addr=10.0.0.2, port=8010 00:23:24.258 [2024-10-08 18:30:42.527017] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:24.258 [2024-10-08 18:30:42.527033] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:24.258 [2024-10-08 18:30:42.527047] bdev_nvme.c:7324:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:25.632 [2024-10-08 18:30:43.529304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.632 [2024-10-08 18:30:43.529368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209de00 with addr=10.0.0.2, port=8010 00:23:25.632 [2024-10-08 18:30:43.529395] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:25.632 [2024-10-08 18:30:43.529410] nvme.c: 831:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:23:25.632 [2024-10-08 18:30:43.529423] bdev_nvme.c:7324:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:26.566 [2024-10-08 18:30:44.531458] bdev_nvme.c:7299:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:26.566 request: 00:23:26.566 { 00:23:26.566 "name": "nvme_second", 00:23:26.566 "trtype": "tcp", 00:23:26.566 "traddr": "10.0.0.2", 00:23:26.566 "adrfam": "ipv4", 00:23:26.566 "trsvcid": "8010", 00:23:26.566 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:26.566 "wait_for_attach": false, 00:23:26.566 "attach_timeout_ms": 3000, 00:23:26.566 "method": "bdev_nvme_start_discovery", 00:23:26.566 "req_id": 1 00:23:26.566 } 00:23:26.566 Got JSON-RPC error response 00:23:26.566 response: 00:23:26.566 { 00:23:26.566 "code": -110, 00:23:26.566 "message": "Connection timed out" 00:23:26.566 } 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1323832 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:26.566 rmmod nvme_tcp 00:23:26.566 rmmod nvme_fabrics 00:23:26.566 rmmod nvme_keyring 00:23:26.566 18:30:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 1323813 ']' 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 1323813 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 1323813 ']' 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 1323813 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1323813 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1323813' 00:23:26.566 killing process with pid 1323813 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 1323813 00:23:26.566 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 1323813 00:23:26.825 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:26.825 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:26.825 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:26.825 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:23:26.825 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:23:26.825 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:26.825 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:23:26.825 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:26.825 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:26.825 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.825 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:26.825 18:30:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.359 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:29.359 00:23:29.359 real 0m14.644s 00:23:29.359 user 0m21.423s 00:23:29.359 sys 0m3.049s 00:23:29.359 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:29.359 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.359 
************************************ 00:23:29.359 END TEST nvmf_host_discovery 00:23:29.359 ************************************ 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.360 ************************************ 00:23:29.360 START TEST nvmf_host_multipath_status 00:23:29.360 ************************************ 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:29.360 * Looking for test storage... 00:23:29.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:29.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.360 --rc genhtml_branch_coverage=1 00:23:29.360 --rc genhtml_function_coverage=1 00:23:29.360 --rc genhtml_legend=1 00:23:29.360 --rc geninfo_all_blocks=1 00:23:29.360 --rc geninfo_unexecuted_blocks=1 00:23:29.360 00:23:29.360 ' 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:29.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.360 --rc genhtml_branch_coverage=1 00:23:29.360 --rc genhtml_function_coverage=1 00:23:29.360 --rc genhtml_legend=1 00:23:29.360 --rc geninfo_all_blocks=1 00:23:29.360 --rc geninfo_unexecuted_blocks=1 00:23:29.360 00:23:29.360 ' 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:29.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.360 --rc genhtml_branch_coverage=1 00:23:29.360 --rc genhtml_function_coverage=1 00:23:29.360 --rc genhtml_legend=1 00:23:29.360 --rc geninfo_all_blocks=1 00:23:29.360 --rc geninfo_unexecuted_blocks=1 00:23:29.360 00:23:29.360 ' 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:29.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.360 --rc genhtml_branch_coverage=1 00:23:29.360 --rc genhtml_function_coverage=1 00:23:29.360 --rc genhtml_legend=1 00:23:29.360 --rc geninfo_all_blocks=1 00:23:29.360 --rc geninfo_unexecuted_blocks=1 00:23:29.360 00:23:29.360 ' 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:29.360 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:29.360 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:29.361 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:29.361 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:29.361 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:29.361 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:29.361 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:29.361 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:29.361 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:29.361 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:29.361 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:29.361 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:29.361 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:29.361 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:29.361 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:29.361 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:29.361 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.361 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:29.361 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.361 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:29.361 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:29.361 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:23:29.361 18:30:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:31.265 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:31.265 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:23:31.265 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:31.265 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:31.265 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:31.265 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:31.265 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:31.265 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:23:31.265 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:31.265 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:23:31.265 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:23:31.265 18:30:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:23:31.265 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:23:31.265 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:23:31.265 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:31.266 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:31.266 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:31.266 Found net devices under 0000:09:00.0: cvl_0_0 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: 
cvl_0_1' 00:23:31.266 Found net devices under 0000:09:00.1: cvl_0_1 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:31.266 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:31.558 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:31.558 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:31.558 18:30:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:31.558 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:31.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:31.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:23:31.558 00:23:31.558 --- 10.0.0.2 ping statistics --- 00:23:31.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.558 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:23:31.558 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:31.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:31.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:23:31.558 00:23:31.558 --- 10.0.0.1 ping statistics --- 00:23:31.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.558 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:23:31.558 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:31.558 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:23:31.558 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:31.558 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:31.558 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:31.558 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:31.558 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:31.558 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:31.558 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:31.558 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:31.558 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:31.558 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:31.558 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:31.558 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=1327133 00:23:31.558 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:31.558 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 1327133 00:23:31.558 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1327133 ']' 00:23:31.558 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.558 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:31.558 18:30:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:31.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:31.558 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:31.558 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:31.558 [2024-10-08 18:30:49.644846] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:23:31.558 [2024-10-08 18:30:49.644936] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:31.558 [2024-10-08 18:30:49.708461] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:31.558 [2024-10-08 18:30:49.814668] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:31.558 [2024-10-08 18:30:49.814723] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:31.558 [2024-10-08 18:30:49.814754] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:31.558 [2024-10-08 18:30:49.814766] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:31.558 [2024-10-08 18:30:49.814777] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:31.558 [2024-10-08 18:30:49.815549] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.558 [2024-10-08 18:30:49.815555] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.841 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:31.841 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:23:31.841 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:31.841 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:31.841 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:31.841 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:31.841 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1327133 00:23:31.841 18:30:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:32.098 [2024-10-08 18:30:50.261409] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.098 18:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:32.356 Malloc0 00:23:32.356 18:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:23:32.922 18:30:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:32.922 18:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:33.488 [2024-10-08 18:30:51.483598] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:33.488 18:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:33.746 [2024-10-08 18:30:51.784430] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:33.746 18:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1327423 00:23:33.746 18:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:33.746 18:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:33.746 18:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1327423 /var/tmp/bdevperf.sock 00:23:33.746 18:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1327423 ']' 00:23:33.746 18:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:33.746 18:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:33.746 18:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:33.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
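The target side of the test is now fully built: cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24 while cvl_0_1 kept 10.0.0.1/24 in the default namespace, an iptables rule opened TCP port 4420, connectivity was verified with ping in both directions, and nvmf_tgt (core mask 0x3) was started inside the namespace before the TCP transport, the Malloc0 bdev, subsystem cnode1, its namespace and two listeners on ports 4420 and 4421 were created over RPC. Condensed into a minimal sketch of the same command sequence (rpc.py stands for the repo's scripts/rpc.py, other paths shortened):

  # network plumbing, as traced above
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # nvmf target runs inside the namespace; objects are created over its RPC socket
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

Both listeners share 10.0.0.2, so the two paths the host will see differ only by TCP service port, and the ANA state of each listener can be toggled independently later in the run.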
00:23:33.746 18:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:33.746 18:30:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:34.005 18:30:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:34.005 18:30:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:23:34.005 18:30:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:34.262 18:30:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:34.831 Nvme0n1 00:23:34.831 18:30:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:35.398 Nvme0n1 00:23:35.398 18:30:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:35.398 18:30:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:37.296 18:30:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:37.296 18:30:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:37.554 18:30:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:38.120 18:30:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:39.055 18:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:39.055 18:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:39.055 18:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.055 18:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:39.313 18:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.313 18:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:39.313 18:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.313 18:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:39.571 18:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:39.571 18:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:39.571 18:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.571 18:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:39.829 18:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.829 18:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:39.829 18:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.829 18:30:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:40.087 18:30:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.087 18:30:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:40.087 18:30:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.087 18:30:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:40.344 18:30:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.344 18:30:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:40.344 18:30:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.344 18:30:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:40.602 18:30:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.602 18:30:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:40.602 18:30:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
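On the initiator side everything goes through bdevperf's own RPC socket at /var/tmp/bdevperf.sock: bdevperf is started with -z so it waits to be configured, nvme options are set, and the same subsystem is attached twice with -x multipath, once per listener port, so both connections collapse into a single Nvme0n1 bdev with two I/O paths; perform_tests then starts the 128-deep, 4 KiB verify workload, which keeps running while the ANA and path-status checks proceed. A condensed sketch of that sequence from the trace (rpc.py and binary paths shortened as above):

  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  # start the verify workload; the trace continues with status checks while it runs
  ./examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests &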
00:23:40.860 18:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:41.117 18:30:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:42.052 18:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:42.052 18:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:42.052 18:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.052 18:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:42.310 18:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:42.310 18:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:42.310 18:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.310 18:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:42.876 18:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.876 18:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:42.876 18:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.876 18:31:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:43.134 18:31:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.134 18:31:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:43.134 18:31:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.134 18:31:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:43.392 18:31:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.392 18:31:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:43.392 18:31:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
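Every check_status round in this trace is six probes of the same shape: query bdev_nvme_get_io_paths through the bdevperf socket, let jq pick the current, connected or accessible flag of the path whose trsvcid matches the port, and compare it with the expected value. The six positional arguments are read in the order 4420-current, 4421-current, 4420-connected, 4421-connected, 4420-accessible, 4421-accessible. A reconstruction of the two helpers the xtrace keeps expanding (simplified relative to the script's own port_status and check_status):

  # port_status <port> <attr> <expected>, e.g. port_status 4420 current true
  port_status() {
      local port=$1 attr=$2 expected=$3
      local got
      got=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
      [[ "$got" == "$expected" ]]
  }

  # check_status <4420 current> <4421 current> <4420 conn> <4421 conn> <4420 acc> <4421 acc>
  check_status() {
      port_status 4420 current    "$1" &&
      port_status 4421 current    "$2" &&
      port_status 4420 connected  "$3" &&
      port_status 4421 connected  "$4" &&
      port_status 4420 accessible "$5" &&
      port_status 4421 accessible "$6"
  }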
00:23:43.392 18:31:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:43.650 18:31:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.650 18:31:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:43.650 18:31:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.650 18:31:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:43.908 18:31:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.908 18:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:43.908 18:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:44.165 18:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:44.423 18:31:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:45.357 18:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:45.357 18:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:45.357 18:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.357 18:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:45.615 18:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.615 18:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:45.615 18:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.615 18:31:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:45.873 18:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:45.873 18:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:45.873 18:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.873 18:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:46.131 18:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:46.131 18:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:46.131 18:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.131 18:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:46.389 18:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:46.389 18:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:46.389 18:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.389 18:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:46.646 18:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:46.647 18:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:46.647 18:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.647 18:31:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:47.213 18:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.213 18:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:47.213 18:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:47.213 18:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:47.778 18:31:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:48.711 18:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:48.711 18:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:48.712 18:31:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.712 18:31:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:48.969 18:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.969 18:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:48.969 18:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.969 18:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:49.227 18:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:49.227 18:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:49.227 18:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.227 18:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:49.485 18:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:49.485 18:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:49.485 18:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.485 18:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:49.743 18:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:49.743 18:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:49.743 18:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.743 18:31:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:50.001 18:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.001 18:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:50.001 18:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.001 18:31:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:50.258 18:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:50.258 18:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:50.258 18:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:50.515 18:31:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:50.773 18:31:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:52.145 18:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:52.145 18:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:52.145 18:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.145 18:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:52.145 18:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:52.145 18:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:52.145 18:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.145 18:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:52.410 18:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:52.410 18:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:52.410 18:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.410 18:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:52.674 18:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.674 18:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:52.674 18:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.674 18:31:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:52.931 18:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.931 18:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:52.931 18:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.931 18:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:53.195 18:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:53.195 18:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:53.195 18:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.195 18:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:53.454 18:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:53.454 18:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:53.454 18:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:53.712 18:31:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:53.968 18:31:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:55.382 18:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:55.382 18:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:55.382 18:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.382 18:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:55.382 18:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:55.382 18:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:55.382 18:31:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.382 18:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:55.647 18:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.647 18:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:55.647 18:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.647 18:31:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:55.905 18:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.905 18:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:55.905 18:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.905 18:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:56.163 18:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.163 18:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:56.163 18:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.163 18:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:56.421 18:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:56.421 18:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:56.421 18:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.421 18:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:56.680 18:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.680 18:31:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:56.938 18:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:23:56.938 18:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:57.196 18:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:57.453 18:31:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:58.827 18:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:58.827 18:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:58.827 18:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.827 18:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:58.827 18:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.827 18:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:58.827 18:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.827 18:31:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:59.085 18:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.085 18:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:59.085 18:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.085 18:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:59.344 18:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.344 18:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:59.344 18:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.344 18:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:59.602 18:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.602 18:31:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:59.602 18:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.602 18:31:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:59.860 18:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.860 18:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:59.860 18:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.860 18:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:00.118 18:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.118 18:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:00.118 18:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:00.376 18:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:00.634 18:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:02.007 18:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:02.007 18:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:02.007 18:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.007 18:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:02.007 18:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:02.007 18:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:02.007 18:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.007 18:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:02.265 18:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
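Each round starts on the target side with set_ANA_state, which re-advertises the ANA group state of the two listeners, and the host is given a second to pick up the change before check_status runs. Halfway through, the trace also switches Nvme0n1 from the single-current-path behaviour seen earlier to active_active with bdev_nvme_set_multipath_policy; from then on every path in the best reachable ANA state is reported as current, which is why the optimized/optimized and non_optimized/non_optimized rounds expect current=true on both ports while mixed states still leave only the better path current. The target-side helper is just two RPCs, as the @59/@60 lines show:

  # set_ANA_state <state for port 4420> <state for port 4421>
  set_ANA_state() {
      rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  # switch Nvme0n1 to active_active path selection, then make both listeners optimized
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
  set_ANA_state optimized optimized   # both paths now report current=true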
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.265 18:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:02.265 18:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.265 18:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:02.523 18:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.523 18:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:02.523 18:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.523 18:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:02.781 18:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:02.781 18:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:02.781 18:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.781 18:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:03.039 18:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.039 18:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:03.039 18:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.039 18:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:03.297 18:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.297 18:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:03.297 18:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:03.555 18:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:04.121 18:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
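The per-field probes above are convenient for asserting one flag at a time; when reading a log like this by hand it can be easier to dump all three flags for both paths in one call. A small jq variant over the same bdev_nvme_get_io_paths output (not part of the test script, just a convenience built from the fields the trace already selects):

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
      jq -r '.poll_groups[].io_paths[]
             | "\(.transport.trsvcid): current=\(.current) connected=\(.connected) accessible=\(.accessible)"'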
00:24:05.056 18:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:05.056 18:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:05.056 18:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.056 18:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:05.314 18:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.314 18:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:05.314 18:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.314 18:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:05.572 18:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.572 18:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:05.572 18:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.572 18:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:05.830 18:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.830 18:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:05.830 18:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.830 18:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:06.089 18:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.089 18:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:06.089 18:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.089 18:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:06.347 18:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.347 18:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:06.347 18:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.347 18:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:06.605 18:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.605 18:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:06.605 18:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:06.864 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:07.122 18:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:08.057 18:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:08.057 18:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:08.057 18:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.057 18:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:08.315 18:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.315 18:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:08.315 18:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.315 18:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:08.882 18:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:08.882 18:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:08.882 18:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.882 18:31:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:08.882 18:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:24:08.882 18:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:08.882 18:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.882 18:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:09.449 18:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.449 18:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:09.449 18:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.449 18:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:09.449 18:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.449 18:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:09.449 18:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:09.449 18:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:10.015 18:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:10.015 18:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1327423 00:24:10.015 18:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1327423 ']' 00:24:10.015 18:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1327423 00:24:10.015 18:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:24:10.015 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:10.015 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1327423 00:24:10.015 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:10.015 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:10.015 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1327423' 00:24:10.015 killing process with pid 1327423 00:24:10.015 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1327423 00:24:10.015 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1327423 00:24:10.015 { 00:24:10.015 "results": [ 00:24:10.015 { 00:24:10.015 "job": "Nvme0n1", 
00:24:10.015 "core_mask": "0x4", 00:24:10.015 "workload": "verify", 00:24:10.015 "status": "terminated", 00:24:10.015 "verify_range": { 00:24:10.015 "start": 0, 00:24:10.015 "length": 16384 00:24:10.015 }, 00:24:10.015 "queue_depth": 128, 00:24:10.015 "io_size": 4096, 00:24:10.015 "runtime": 34.414611, 00:24:10.015 "iops": 7949.385218969931, 00:24:10.015 "mibps": 31.052286011601293, 00:24:10.015 "io_failed": 0, 00:24:10.015 "io_timeout": 0, 00:24:10.015 "avg_latency_us": 16076.085426064355, 00:24:10.015 "min_latency_us": 187.35407407407408, 00:24:10.015 "max_latency_us": 4026531.84 00:24:10.015 } 00:24:10.015 ], 00:24:10.015 "core_count": 1 00:24:10.015 } 00:24:10.274 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1327423 00:24:10.274 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:10.274 [2024-10-08 18:30:51.848684] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:24:10.274 [2024-10-08 18:30:51.848770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327423 ] 00:24:10.274 [2024-10-08 18:30:51.908981] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.274 [2024-10-08 18:30:52.023779] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:24:10.274 Running I/O for 90 seconds... 00:24:10.274 8297.00 IOPS, 32.41 MiB/s [2024-10-08T16:31:28.561Z] 8334.00 IOPS, 32.55 MiB/s [2024-10-08T16:31:28.561Z] 8314.00 IOPS, 32.48 MiB/s [2024-10-08T16:31:28.561Z] 8341.50 IOPS, 32.58 MiB/s [2024-10-08T16:31:28.561Z] 8342.00 IOPS, 32.59 MiB/s [2024-10-08T16:31:28.561Z] 8347.50 IOPS, 32.61 MiB/s [2024-10-08T16:31:28.561Z] 8356.00 IOPS, 32.64 MiB/s [2024-10-08T16:31:28.561Z] 8401.12 IOPS, 32.82 MiB/s [2024-10-08T16:31:28.561Z] 8409.56 IOPS, 32.85 MiB/s [2024-10-08T16:31:28.561Z] 8407.90 IOPS, 32.84 MiB/s [2024-10-08T16:31:28.561Z] 8407.00 IOPS, 32.84 MiB/s [2024-10-08T16:31:28.561Z] 8404.33 IOPS, 32.83 MiB/s [2024-10-08T16:31:28.561Z] 8395.46 IOPS, 32.79 MiB/s [2024-10-08T16:31:28.561Z] 8392.57 IOPS, 32.78 MiB/s [2024-10-08T16:31:28.561Z] 8393.53 IOPS, 32.79 MiB/s [2024-10-08T16:31:28.561Z] [2024-10-08 18:31:08.697546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.274 [2024-10-08 18:31:08.697601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:10.274 [2024-10-08 18:31:08.697668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.274 [2024-10-08 18:31:08.697691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:10.274 [2024-10-08 18:31:08.697716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.697733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.697757] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.697774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.697813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.697829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.697866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.697883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.697906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.697922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.697944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.697975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.697999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.698016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.698053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.698071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.698094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.698111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.698134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.698150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.698173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.698189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0007 p:0 
m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.698212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.698229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.698252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.698284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.698316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.698349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.698373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.698389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.698412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.698429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.698453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.698470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.698492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.698509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.698532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.698549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.698587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.698608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.698647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.698664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.698687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.698704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.698727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.698743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.698766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.698783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.698805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.698822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.698845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.698861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.698883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.698900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.698938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.698955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.698977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.699009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.699034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.699051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.699372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.699396] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.699428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.699451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.699479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.699496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.699522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.699538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.699565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.699582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.699607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.699623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.699648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.699665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.699689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.699706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.699731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.275 [2024-10-08 18:31:08.699747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.699773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.275 [2024-10-08 18:31:08.699790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.699814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:10.275 [2024-10-08 18:31:08.699831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.699869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.275 [2024-10-08 18:31:08.699886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.699911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.275 [2024-10-08 18:31:08.699927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.699951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.275 [2024-10-08 18:31:08.699966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.699995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.275 [2024-10-08 18:31:08.700012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.700051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:96872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.275 [2024-10-08 18:31:08.700069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.700094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.700110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.700136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.700152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.700177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.700194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.700219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.700236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.700261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 
lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.700278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.700311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.700329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.700354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.700371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.700396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.275 [2024-10-08 18:31:08.700412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:10.275 [2024-10-08 18:31:08.700436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.700453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.700478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.700494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.700524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.700541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.700566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.700599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.700625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.700655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.700679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.700695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.700718] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.700749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.700774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.700790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.700831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.700848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.700873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.700890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.700915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.700931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.700956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.700973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.700998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.701015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.701039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.701055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.701081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.701101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.701142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.701158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:24:10.276 [2024-10-08 18:31:08.701181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:96880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.276 [2024-10-08 18:31:08.701197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.701221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.701236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.701260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.701275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.701322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.701340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.701364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.701381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.701405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.701421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.701445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.701461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.701485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.701501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.701526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.701542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.701565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.701582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.701620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.701643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.701669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.701686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.701710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.701726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.701749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.701765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.701789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.701805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.701829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.701845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.701868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.701884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.702030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.702066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.702105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.702124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.702152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.702168] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.702196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.702212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.702239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.702256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.702283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.702324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.702378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.702396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:10.276 [2024-10-08 18:31:08.702424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:97704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.276 [2024-10-08 18:31:08.702440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:08.702468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.277 [2024-10-08 18:31:08.702484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:08.702512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:97720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.277 [2024-10-08 18:31:08.702529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:08.702556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.277 [2024-10-08 18:31:08.702572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:08.702614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.277 [2024-10-08 18:31:08.702631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:08.702658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:10.277 [2024-10-08 18:31:08.702674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:08.702700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.277 [2024-10-08 18:31:08.702716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:08.702743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.277 [2024-10-08 18:31:08.702759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:08.702786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.277 [2024-10-08 18:31:08.702802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:08.702829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.277 [2024-10-08 18:31:08.702844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:08.702872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.277 [2024-10-08 18:31:08.702888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:08.702919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.277 [2024-10-08 18:31:08.702935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:08.702962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.277 [2024-10-08 18:31:08.702978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:08.703005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.277 [2024-10-08 18:31:08.703020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:08.703047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.277 [2024-10-08 18:31:08.703062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:08.703089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 
lba:97824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.277 [2024-10-08 18:31:08.703105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:08.703132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.277 [2024-10-08 18:31:08.703148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:08.703175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.277 [2024-10-08 18:31:08.703191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:08.703219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:96904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.277 [2024-10-08 18:31:08.703235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:08.703262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:96912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.277 [2024-10-08 18:31:08.703277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:08.703330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:96920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.277 [2024-10-08 18:31:08.703348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:08.703376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.277 [2024-10-08 18:31:08.703393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:08.703421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.277 [2024-10-08 18:31:08.703437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:08.703464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:96944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.277 [2024-10-08 18:31:08.703485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:08.703514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:96952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.277 [2024-10-08 18:31:08.703531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:08.703559] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:96960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.277 [2024-10-08 18:31:08.703575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:08.703617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.277 [2024-10-08 18:31:08.703633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:08.703660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.277 [2024-10-08 18:31:08.703676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:08.703703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.277 [2024-10-08 18:31:08.703718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:08.703746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.277 [2024-10-08 18:31:08.703761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:08.703788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.277 [2024-10-08 18:31:08.703803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:08.703830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.277 [2024-10-08 18:31:08.703846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:10.277 7924.38 IOPS, 30.95 MiB/s [2024-10-08T16:31:28.564Z] 7458.24 IOPS, 29.13 MiB/s [2024-10-08T16:31:28.564Z] 7043.89 IOPS, 27.52 MiB/s [2024-10-08T16:31:28.564Z] 6673.16 IOPS, 26.07 MiB/s [2024-10-08T16:31:28.564Z] 6721.20 IOPS, 26.25 MiB/s [2024-10-08T16:31:28.564Z] 6808.43 IOPS, 26.60 MiB/s [2024-10-08T16:31:28.564Z] 6906.27 IOPS, 26.98 MiB/s [2024-10-08T16:31:28.564Z] 7093.52 IOPS, 27.71 MiB/s [2024-10-08T16:31:28.564Z] 7267.50 IOPS, 28.39 MiB/s [2024-10-08T16:31:28.564Z] 7431.64 IOPS, 29.03 MiB/s [2024-10-08T16:31:28.564Z] 7472.00 IOPS, 29.19 MiB/s [2024-10-08T16:31:28.564Z] 7507.07 IOPS, 29.32 MiB/s [2024-10-08T16:31:28.564Z] 7538.39 IOPS, 29.45 MiB/s [2024-10-08T16:31:28.564Z] 7613.21 IOPS, 29.74 MiB/s [2024-10-08T16:31:28.564Z] 7732.20 IOPS, 30.20 MiB/s [2024-10-08T16:31:28.564Z] 7850.52 IOPS, 30.67 MiB/s [2024-10-08T16:31:28.564Z] [2024-10-08 18:31:25.279466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:38600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.277 [2024-10-08 18:31:25.279528] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:25.279600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:38616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.277 [2024-10-08 18:31:25.279623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:25.279661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.277 [2024-10-08 18:31:25.279679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:25.279703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:38648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.277 [2024-10-08 18:31:25.279719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:25.279743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.277 [2024-10-08 18:31:25.279760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:25.279783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:38312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.277 [2024-10-08 18:31:25.279800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:10.277 [2024-10-08 18:31:25.279839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:38336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.277 [2024-10-08 18:31:25.279856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:10.278 [2024-10-08 18:31:25.279892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:38368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.278 [2024-10-08 18:31:25.279909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:10.278 [2024-10-08 18:31:25.280122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:38664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.278 [2024-10-08 18:31:25.280146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:10.278 [2024-10-08 18:31:25.280171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:38680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:10.278 [2024-10-08 18:31:25.280188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:10.278 [2024-10-08 18:31:25.280211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38696 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:10.278 [2024-10-08 18:31:25.280239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:10.278 [... repeated nvme_io_qpair_print_command (WRITE/READ) and spdk_nvme_print_completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) notices for qid:1 with varying cid/lba values elided ...] 00:24:10.279 [2024-10-08 18:31:25.285082] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:10.279 7921.19 IOPS, 30.94 MiB/s [2024-10-08T16:31:28.566Z] 7938.48 IOPS, 31.01 MiB/s [2024-10-08T16:31:28.566Z] 7947.71 IOPS, 31.05 MiB/s [2024-10-08T16:31:28.566Z] Received shutdown signal, test time was about 34.415471 seconds 00:24:10.279 00:24:10.279 Latency(us) 00:24:10.279 [2024-10-08T16:31:28.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.279 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:10.279 Verification LBA range: start 0x0 length 0x4000 00:24:10.279 Nvme0n1 : 34.41 7949.39 31.05 0.00 0.00 16076.09 187.35 4026531.84 00:24:10.279 [2024-10-08T16:31:28.566Z] =================================================================================================================== 00:24:10.279 [2024-10-08T16:31:28.566Z] Total : 7949.39 31.05 0.00 0.00 16076.09 187.35 4026531.84 00:24:10.279 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:10.537 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:10.537 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:10.537 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:10.537 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:10.537 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:24:10.537 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:10.537 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:24:10.537 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:10.537 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:10.537 rmmod nvme_tcp 00:24:10.537 rmmod nvme_fabrics 00:24:10.537 rmmod nvme_keyring 00:24:10.537 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:10.537 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:24:10.537 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:24:10.537 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 1327133 ']' 00:24:10.537 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 1327133 00:24:10.537 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1327133 ']' 00:24:10.537 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1327133 00:24:10.537 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:24:10.537 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:10.537 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 1327133 00:24:10.537 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:10.537 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:10.537 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1327133' 00:24:10.537 killing process with pid 1327133 00:24:10.537 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1327133 00:24:10.537 18:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1327133 00:24:10.796 18:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:10.796 18:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:10.796 18:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:10.796 18:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:24:10.796 18:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save 00:24:10.796 18:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:10.796 18:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore 00:24:10.796 18:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:10.796 18:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:10.796 18:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.796 18:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:10.796 18:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:13.331 00:24:13.331 real 0m43.993s 00:24:13.331 user 2m13.747s 00:24:13.331 sys 0m10.862s 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:13.331 ************************************ 00:24:13.331 END TEST nvmf_host_multipath_status 00:24:13.331 ************************************ 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.331 ************************************ 00:24:13.331 START TEST nvmf_discovery_remove_ifc 00:24:13.331 ************************************ 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:13.331 * Looking for test storage... 00:24:13.331 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:13.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.331 --rc genhtml_branch_coverage=1 00:24:13.331 --rc genhtml_function_coverage=1 00:24:13.331 --rc genhtml_legend=1 00:24:13.331 --rc geninfo_all_blocks=1 00:24:13.331 --rc geninfo_unexecuted_blocks=1 00:24:13.331 00:24:13.331 ' 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:13.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.331 --rc genhtml_branch_coverage=1 00:24:13.331 --rc genhtml_function_coverage=1 00:24:13.331 --rc genhtml_legend=1 00:24:13.331 --rc geninfo_all_blocks=1 00:24:13.331 --rc geninfo_unexecuted_blocks=1 00:24:13.331 00:24:13.331 ' 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:13.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.331 --rc genhtml_branch_coverage=1 00:24:13.331 --rc genhtml_function_coverage=1 00:24:13.331 --rc genhtml_legend=1 00:24:13.331 --rc geninfo_all_blocks=1 00:24:13.331 --rc geninfo_unexecuted_blocks=1 00:24:13.331 00:24:13.331 ' 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:13.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.331 --rc genhtml_branch_coverage=1 00:24:13.331 --rc genhtml_function_coverage=1 00:24:13.331 --rc genhtml_legend=1 00:24:13.331 --rc geninfo_all_blocks=1 00:24:13.331 --rc geninfo_unexecuted_blocks=1 00:24:13.331 00:24:13.331 ' 00:24:13.331 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:13.331 
18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:13.332 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:24:13.332 18:31:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:15.235 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:15.235 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:24:15.235 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:15.235 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:15.235 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:15.235 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:15.235 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:15.235 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:24:15.235 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:15.235 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:24:15.235 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:24:15.235 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:24:15.235 18:31:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:24:15.235 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:24:15.235 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:24:15.235 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:15.235 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:15.235 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:15.235 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:15.235 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:15.235 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:15.235 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:15.235 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:15.235 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:15.235 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:15.235 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:15.236 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:15.236 18:31:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:15.236 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:15.236 Found net devices under 0000:09:00.0: cvl_0_0 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:15.236 Found net devices under 0000:09:00.1: cvl_0_1 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:15.236 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:15.494 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:15.495 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:15.495 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:15.495 
18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:15.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:15.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:24:15.495 00:24:15.495 --- 10.0.0.2 ping statistics --- 00:24:15.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.495 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:24:15.495 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:15.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:15.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:24:15.495 00:24:15.495 --- 10.0.0.1 ping statistics --- 00:24:15.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.495 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:24:15.495 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:15.495 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:24:15.495 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:15.495 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:15.495 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:15.495 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:15.495 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:15.495 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:15.495 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:15.495 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:15.495 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:15.495 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:15.495 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:15.495 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=1333892 00:24:15.495 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:15.495 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 1333892 00:24:15.495 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1333892 ']' 00:24:15.495 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:15.495 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:15.495 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:15.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:15.495 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:15.495 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:15.495 [2024-10-08 18:31:33.634837] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:24:15.495 [2024-10-08 18:31:33.634941] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:15.495 [2024-10-08 18:31:33.699894] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.753 [2024-10-08 18:31:33.813008] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:15.753 [2024-10-08 18:31:33.813061] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:15.753 [2024-10-08 18:31:33.813090] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:15.753 [2024-10-08 18:31:33.813102] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:15.753 [2024-10-08 18:31:33.813112] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:15.753 [2024-10-08 18:31:33.813783] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:15.753 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:15.753 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:24:15.753 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:15.753 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:15.753 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:15.753 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:15.753 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:15.753 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.753 18:31:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:15.753 [2024-10-08 18:31:33.977012] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:15.753 [2024-10-08 18:31:33.985231] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:15.753 null0 00:24:15.753 [2024-10-08 18:31:34.017142] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:15.753 18:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.753 18:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1333915 00:24:15.753 18:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1333915 /tmp/host.sock 00:24:15.753 18:31:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1333915 ']' 00:24:15.753 18:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:15.753 18:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:24:15.753 18:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:15.753 18:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:15.753 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:15.753 18:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:15.753 18:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:16.012 [2024-10-08 18:31:34.088335] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:24:16.012 [2024-10-08 18:31:34.088416] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1333915 ] 00:24:16.012 [2024-10-08 18:31:34.151139] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.012 [2024-10-08 18:31:34.262380] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.269 18:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:16.269 18:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:24:16.269 18:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:16.269 18:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:16.269 18:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.269 18:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:16.269 18:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.269 18:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:16.269 18:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.269 18:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:16.269 18:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.269 18:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:16.269 18:31:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.269 18:31:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:17.201 [2024-10-08 18:31:35.485494] bdev_nvme.c:7256:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:17.201 [2024-10-08 18:31:35.485530] bdev_nvme.c:7342:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:17.201 [2024-10-08 18:31:35.485556] bdev_nvme.c:7219:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:17.459 [2024-10-08 18:31:35.571829] bdev_nvme.c:7185:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:17.717 [2024-10-08 18:31:35.796672] bdev_nvme.c:8052:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:17.717 [2024-10-08 18:31:35.796739] bdev_nvme.c:8052:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:17.717 [2024-10-08 18:31:35.796777] bdev_nvme.c:8052:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:17.717 [2024-10-08 18:31:35.796798] bdev_nvme.c:7075:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:17.717 [2024-10-08 18:31:35.796827] bdev_nvme.c:7034:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:17.717 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.717 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:17.717 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:17.717 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:17.717 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:17.717 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.717 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:17.717 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:17.717 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:17.717 [2024-10-08 18:31:35.803431] bdev_nvme.c:1739:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xac3070 was disconnected and freed. delete nvme_qpair. 
00:24:17.717 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.717 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:17.717 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:17.717 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:17.717 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:17.717 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:17.717 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:17.717 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.717 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:17.717 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:17.717 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:17.717 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:17.717 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.717 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:17.717 18:31:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:18.650 18:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:18.650 18:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:18.650 18:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:18.650 18:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.650 18:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:18.650 18:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:18.650 18:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:18.650 18:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.908 18:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:18.908 18:31:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:19.842 18:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:19.842 18:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:19.842 18:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:19.842 18:31:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.842 18:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:19.842 18:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:19.842 18:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:19.842 18:31:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.842 18:31:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:19.842 18:31:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:20.775 18:31:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:20.775 18:31:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:20.775 18:31:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:20.775 18:31:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.775 18:31:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:20.775 18:31:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:20.775 18:31:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:20.775 18:31:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.775 18:31:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:20.775 18:31:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:22.211 18:31:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:22.211 18:31:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:22.211 18:31:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:22.211 18:31:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.211 18:31:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:22.211 18:31:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:22.211 18:31:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:22.211 18:31:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.211 18:31:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:22.211 18:31:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:23.168 18:31:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:23.168 18:31:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:23.168 18:31:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:23.168 18:31:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.168 18:31:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:23.168 18:31:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:23.168 18:31:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:23.168 18:31:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.168 18:31:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:23.168 18:31:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:23.168 [2024-10-08 18:31:41.238357] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:23.168 [2024-10-08 18:31:41.238438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:23.168 [2024-10-08 18:31:41.238460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.168 [2024-10-08 18:31:41.238477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:23.168 [2024-10-08 18:31:41.238491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.168 [2024-10-08 18:31:41.238505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:23.168 [2024-10-08 18:31:41.238518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.168 [2024-10-08 18:31:41.238532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:23.168 [2024-10-08 18:31:41.238545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.168 [2024-10-08 18:31:41.238567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:23.168 [2024-10-08 18:31:41.238595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.168 [2024-10-08 18:31:41.238608] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9f8d0 is same with the state(6) to be set 00:24:23.168 [2024-10-08 18:31:41.248365] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa9f8d0 (9): Bad file descriptor 00:24:23.168 [2024-10-08 18:31:41.258411] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:24.100 18:31:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:24.100 18:31:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:24:24.100 18:31:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.100 18:31:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:24.100 18:31:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:24.100 18:31:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:24.100 18:31:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:24.100 [2024-10-08 18:31:42.285345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:24.100 [2024-10-08 18:31:42.285426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa9f8d0 with addr=10.0.0.2, port=4420 00:24:24.100 [2024-10-08 18:31:42.285453] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa9f8d0 is same with the state(6) to be set 00:24:24.100 [2024-10-08 18:31:42.285505] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa9f8d0 (9): Bad file descriptor 00:24:24.100 [2024-10-08 18:31:42.285974] bdev_nvme.c:3069:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:24.100 [2024-10-08 18:31:42.286017] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:24.100 [2024-10-08 18:31:42.286032] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:24.100 [2024-10-08 18:31:42.286048] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:24.100 [2024-10-08 18:31:42.286081] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:24.100 [2024-10-08 18:31:42.286097] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:24.100 18:31:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.100 18:31:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:24.100 18:31:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:25.033 [2024-10-08 18:31:43.288596] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:25.033 [2024-10-08 18:31:43.288652] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:25.033 [2024-10-08 18:31:43.288667] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:25.033 [2024-10-08 18:31:43.288680] nvme_ctrlr.c:1114:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:24:25.033 [2024-10-08 18:31:43.288720] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:25.033 [2024-10-08 18:31:43.288756] bdev_nvme.c:7007:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:25.033 [2024-10-08 18:31:43.288815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:25.033 [2024-10-08 18:31:43.288837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.033 [2024-10-08 18:31:43.288855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:25.033 [2024-10-08 18:31:43.288868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.033 [2024-10-08 18:31:43.288882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:25.033 [2024-10-08 18:31:43.288896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.033 [2024-10-08 18:31:43.288910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:25.033 [2024-10-08 18:31:43.288923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.034 [2024-10-08 18:31:43.288945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:25.034 [2024-10-08 18:31:43.288959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.034 [2024-10-08 18:31:43.288972] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:24:25.034 [2024-10-08 18:31:43.289096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8ec00 (9): Bad file descriptor 00:24:25.034 [2024-10-08 18:31:43.290112] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:25.034 [2024-10-08 18:31:43.290133] nvme_ctrlr.c:1233:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:24:25.034 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:25.034 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:25.034 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:25.034 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.034 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:25.034 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:25.034 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:25.034 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.291 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:25.291 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:25.291 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:25.291 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:25.291 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:25.291 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:25.291 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:25.291 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.291 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:25.291 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:25.291 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:25.291 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.291 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:25.292 18:31:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:26.226 18:31:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:26.226 18:31:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:26.226 18:31:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:26.226 18:31:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.226 18:31:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:26.226 18:31:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:26.226 18:31:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:26.226 18:31:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.226 18:31:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:26.226 18:31:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:27.159 [2024-10-08 18:31:45.341977] bdev_nvme.c:7256:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:27.159 [2024-10-08 18:31:45.342010] bdev_nvme.c:7342:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:27.159 [2024-10-08 18:31:45.342032] bdev_nvme.c:7219:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:27.417 18:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:27.417 18:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:27.417 18:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:27.417 18:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.417 18:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:27.417 18:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:27.417 18:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:27.417 [2024-10-08 18:31:45.470448] bdev_nvme.c:7185:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:27.417 18:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.417 18:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:27.417 18:31:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:27.417 [2024-10-08 18:31:45.572581] bdev_nvme.c:8052:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:27.417 [2024-10-08 18:31:45.572646] bdev_nvme.c:8052:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:27.417 [2024-10-08 18:31:45.572679] bdev_nvme.c:8052:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:27.417 [2024-10-08 18:31:45.572701] bdev_nvme.c:7075:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:27.417 [2024-10-08 18:31:45.572714] bdev_nvme.c:7034:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:27.417 [2024-10-08 18:31:45.579320] bdev_nvme.c:1739:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xaa9e00 was disconnected and freed. 
delete nvme_qpair. 00:24:28.390 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:28.390 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:28.390 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:28.390 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.390 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:28.390 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:28.390 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:28.390 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.390 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:28.390 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:28.390 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1333915 00:24:28.390 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1333915 ']' 00:24:28.390 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1333915 00:24:28.390 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:24:28.390 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:28.390 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1333915 00:24:28.390 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:28.390 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:28.390 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1333915' 00:24:28.390 killing process with pid 1333915 00:24:28.390 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1333915 00:24:28.390 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1333915 00:24:28.648 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:28.648 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:28.648 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:24:28.648 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:28.648 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:24:28.648 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:28.648 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:28.648 rmmod nvme_tcp 00:24:28.648 rmmod nvme_fabrics 00:24:28.648 rmmod nvme_keyring 
00:24:28.648 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:28.648 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:24:28.648 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:24:28.648 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 1333892 ']' 00:24:28.648 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 1333892 00:24:28.648 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1333892 ']' 00:24:28.648 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1333892 00:24:28.648 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:24:28.648 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:28.648 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1333892 00:24:28.906 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:28.906 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:28.906 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1333892' 00:24:28.906 killing process with pid 1333892 00:24:28.906 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1333892 00:24:28.906 18:31:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1333892 00:24:29.164 18:31:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:29.164 18:31:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:29.164 18:31:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:29.164 18:31:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:24:29.164 18:31:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:24:29.164 18:31:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:29.164 18:31:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:24:29.164 18:31:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:29.164 18:31:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:29.164 18:31:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.164 18:31:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:29.164 18:31:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.070 18:31:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:31.070 00:24:31.070 real 0m18.150s 00:24:31.070 user 0m26.220s 00:24:31.070 sys 0m3.110s 00:24:31.070 18:31:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:31.070 18:31:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:31.070 ************************************ 00:24:31.070 END TEST nvmf_discovery_remove_ifc 00:24:31.070 ************************************ 00:24:31.070 18:31:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:31.070 18:31:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:31.070 18:31:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:31.070 18:31:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.070 ************************************ 00:24:31.070 START TEST nvmf_identify_kernel_target 00:24:31.070 ************************************ 00:24:31.070 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:31.328 * Looking for test storage... 00:24:31.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:31.328 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:31.328 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:24:31.328 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:31.328 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:31.328 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:31.328 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:31.328 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:31.328 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:24:31.328 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:31.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.329 --rc genhtml_branch_coverage=1 00:24:31.329 --rc genhtml_function_coverage=1 00:24:31.329 --rc genhtml_legend=1 00:24:31.329 --rc geninfo_all_blocks=1 00:24:31.329 --rc geninfo_unexecuted_blocks=1 00:24:31.329 00:24:31.329 ' 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:31.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.329 --rc genhtml_branch_coverage=1 00:24:31.329 --rc genhtml_function_coverage=1 00:24:31.329 --rc genhtml_legend=1 00:24:31.329 --rc geninfo_all_blocks=1 00:24:31.329 --rc geninfo_unexecuted_blocks=1 00:24:31.329 00:24:31.329 ' 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:31.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.329 --rc genhtml_branch_coverage=1 00:24:31.329 --rc genhtml_function_coverage=1 00:24:31.329 --rc genhtml_legend=1 00:24:31.329 --rc geninfo_all_blocks=1 00:24:31.329 --rc geninfo_unexecuted_blocks=1 00:24:31.329 00:24:31.329 ' 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:31.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.329 --rc genhtml_branch_coverage=1 00:24:31.329 --rc genhtml_function_coverage=1 00:24:31.329 --rc genhtml_legend=1 00:24:31.329 --rc geninfo_all_blocks=1 00:24:31.329 --rc geninfo_unexecuted_blocks=1 00:24:31.329 00:24:31.329 ' 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:24:31.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:24:31.329 18:31:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:24:33.866 18:31:51 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:33.866 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:33.866 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:33.866 Found net devices under 0000:09:00.0: cvl_0_0 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.866 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:33.867 Found net devices under 0000:09:00.1: cvl_0_1 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:33.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:33.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:24:33.867 00:24:33.867 --- 10.0.0.2 ping statistics --- 00:24:33.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.867 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:33.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:33.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:24:33.867 00:24:33.867 --- 10.0.0.1 ping statistics --- 00:24:33.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.867 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:24:33.867 18:31:51 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:33.867 18:31:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:34.802 Waiting for block devices as requested 00:24:34.803 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:34.803 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:35.060 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:35.060 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:35.060 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:35.318 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:35.318 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:35.318 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:35.318 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:24:35.577 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:35.577 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:35.836 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:35.836 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:35.836 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:35.836 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:36.095 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:36.095 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:36.095 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:24:36.095 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:36.095 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:24:36.095 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:36.095 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:36.095 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
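The configure_kernel_target call above drives a Linux kernel nvmet target entirely through configfs: the entries around this point pick a usable local NVMe block device, back namespace 1 with /dev/nvme0n1, and expose the subsystem on a TCP port at 10.0.0.1:4420. A condensed sketch of those steps follows (run as root); the configfs attribute file names are assumptions based on the standard nvmet layout, because xtrace records only the values being echoed, not the redirection targets.

# Sketch of the kernel NVMe-oF/TCP target built here via configfs. The echoed
# values appear in the trace above/below; the files they are written to are
# assumed from the standard nvmet configfs layout.
modprobe nvmet
modprobe nvmet_tcp

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=$nvmet/ports/1

mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"

echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
echo 1            > "$subsys/attr_allow_any_host"        # no host allow-list
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"   # back namespace 1 with a local NVMe disk
echo 1            > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$port/addr_traddr"    # listen address
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

# Exposing the subsystem on the port is just a symlink; verify with discovery.
ln -s "$subsys" "$port/subsystems/"
nvme discover -t tcp -a 10.0.0.1 -s 4420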
00:24:36.095 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:24:36.095 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:36.095 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:36.353 No valid GPT data, bailing 00:24:36.353 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:36.353 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:24:36.353 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:24:36.353 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:24:36.353 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:24:36.353 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:36.353 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:36.353 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:36.353 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:36.353 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:24:36.353 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:24:36.353 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:24:36.353 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:24:36.353 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:24:36.353 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:24:36.354 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:24:36.354 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:36.354 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:24:36.354 00:24:36.354 Discovery Log Number of Records 2, Generation counter 2 00:24:36.354 =====Discovery Log Entry 0====== 00:24:36.354 trtype: tcp 00:24:36.354 adrfam: ipv4 00:24:36.354 subtype: current discovery subsystem 00:24:36.354 treq: not specified, sq flow control disable supported 00:24:36.354 portid: 1 00:24:36.354 trsvcid: 4420 00:24:36.354 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:36.354 traddr: 10.0.0.1 00:24:36.354 eflags: none 00:24:36.354 sectype: none 00:24:36.354 =====Discovery Log Entry 1====== 00:24:36.354 trtype: tcp 00:24:36.354 adrfam: ipv4 00:24:36.354 subtype: nvme subsystem 00:24:36.354 treq: not specified, sq flow control disable 
supported 00:24:36.354 portid: 1 00:24:36.354 trsvcid: 4420 00:24:36.354 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:36.354 traddr: 10.0.0.1 00:24:36.354 eflags: none 00:24:36.354 sectype: none 00:24:36.354 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:36.354 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:36.354 ===================================================== 00:24:36.354 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:36.354 ===================================================== 00:24:36.354 Controller Capabilities/Features 00:24:36.354 ================================ 00:24:36.354 Vendor ID: 0000 00:24:36.354 Subsystem Vendor ID: 0000 00:24:36.354 Serial Number: 996c6556869d38a4cf13 00:24:36.354 Model Number: Linux 00:24:36.354 Firmware Version: 6.8.9-20 00:24:36.354 Recommended Arb Burst: 0 00:24:36.354 IEEE OUI Identifier: 00 00 00 00:24:36.354 Multi-path I/O 00:24:36.354 May have multiple subsystem ports: No 00:24:36.354 May have multiple controllers: No 00:24:36.354 Associated with SR-IOV VF: No 00:24:36.354 Max Data Transfer Size: Unlimited 00:24:36.354 Max Number of Namespaces: 0 00:24:36.354 Max Number of I/O Queues: 1024 00:24:36.354 NVMe Specification Version (VS): 1.3 00:24:36.354 NVMe Specification Version (Identify): 1.3 00:24:36.354 Maximum Queue Entries: 1024 00:24:36.354 Contiguous Queues Required: No 00:24:36.354 Arbitration Mechanisms Supported 00:24:36.354 Weighted Round Robin: Not Supported 00:24:36.354 Vendor Specific: Not Supported 00:24:36.354 Reset Timeout: 7500 ms 00:24:36.354 Doorbell Stride: 4 bytes 00:24:36.354 NVM Subsystem Reset: Not Supported 00:24:36.354 Command Sets Supported 00:24:36.354 NVM Command Set: Supported 00:24:36.354 Boot Partition: Not Supported 00:24:36.354 Memory Page Size Minimum: 4096 bytes 00:24:36.354 Memory Page Size Maximum: 4096 bytes 00:24:36.354 Persistent Memory Region: Not Supported 00:24:36.354 Optional Asynchronous Events Supported 00:24:36.354 Namespace Attribute Notices: Not Supported 00:24:36.354 Firmware Activation Notices: Not Supported 00:24:36.354 ANA Change Notices: Not Supported 00:24:36.354 PLE Aggregate Log Change Notices: Not Supported 00:24:36.354 LBA Status Info Alert Notices: Not Supported 00:24:36.354 EGE Aggregate Log Change Notices: Not Supported 00:24:36.354 Normal NVM Subsystem Shutdown event: Not Supported 00:24:36.354 Zone Descriptor Change Notices: Not Supported 00:24:36.354 Discovery Log Change Notices: Supported 00:24:36.354 Controller Attributes 00:24:36.354 128-bit Host Identifier: Not Supported 00:24:36.354 Non-Operational Permissive Mode: Not Supported 00:24:36.354 NVM Sets: Not Supported 00:24:36.354 Read Recovery Levels: Not Supported 00:24:36.354 Endurance Groups: Not Supported 00:24:36.354 Predictable Latency Mode: Not Supported 00:24:36.354 Traffic Based Keep ALive: Not Supported 00:24:36.354 Namespace Granularity: Not Supported 00:24:36.354 SQ Associations: Not Supported 00:24:36.354 UUID List: Not Supported 00:24:36.354 Multi-Domain Subsystem: Not Supported 00:24:36.354 Fixed Capacity Management: Not Supported 00:24:36.354 Variable Capacity Management: Not Supported 00:24:36.354 Delete Endurance Group: Not Supported 00:24:36.354 Delete NVM Set: Not Supported 00:24:36.354 Extended LBA Formats Supported: Not Supported 00:24:36.354 Flexible Data Placement 
Supported: Not Supported 00:24:36.354 00:24:36.354 Controller Memory Buffer Support 00:24:36.354 ================================ 00:24:36.354 Supported: No 00:24:36.354 00:24:36.354 Persistent Memory Region Support 00:24:36.354 ================================ 00:24:36.354 Supported: No 00:24:36.354 00:24:36.354 Admin Command Set Attributes 00:24:36.354 ============================ 00:24:36.354 Security Send/Receive: Not Supported 00:24:36.354 Format NVM: Not Supported 00:24:36.354 Firmware Activate/Download: Not Supported 00:24:36.354 Namespace Management: Not Supported 00:24:36.354 Device Self-Test: Not Supported 00:24:36.354 Directives: Not Supported 00:24:36.354 NVMe-MI: Not Supported 00:24:36.354 Virtualization Management: Not Supported 00:24:36.354 Doorbell Buffer Config: Not Supported 00:24:36.354 Get LBA Status Capability: Not Supported 00:24:36.354 Command & Feature Lockdown Capability: Not Supported 00:24:36.354 Abort Command Limit: 1 00:24:36.354 Async Event Request Limit: 1 00:24:36.354 Number of Firmware Slots: N/A 00:24:36.354 Firmware Slot 1 Read-Only: N/A 00:24:36.354 Firmware Activation Without Reset: N/A 00:24:36.354 Multiple Update Detection Support: N/A 00:24:36.354 Firmware Update Granularity: No Information Provided 00:24:36.354 Per-Namespace SMART Log: No 00:24:36.354 Asymmetric Namespace Access Log Page: Not Supported 00:24:36.354 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:36.354 Command Effects Log Page: Not Supported 00:24:36.354 Get Log Page Extended Data: Supported 00:24:36.354 Telemetry Log Pages: Not Supported 00:24:36.354 Persistent Event Log Pages: Not Supported 00:24:36.354 Supported Log Pages Log Page: May Support 00:24:36.354 Commands Supported & Effects Log Page: Not Supported 00:24:36.354 Feature Identifiers & Effects Log Page:May Support 00:24:36.354 NVMe-MI Commands & Effects Log Page: May Support 00:24:36.354 Data Area 4 for Telemetry Log: Not Supported 00:24:36.354 Error Log Page Entries Supported: 1 00:24:36.354 Keep Alive: Not Supported 00:24:36.354 00:24:36.354 NVM Command Set Attributes 00:24:36.354 ========================== 00:24:36.354 Submission Queue Entry Size 00:24:36.354 Max: 1 00:24:36.354 Min: 1 00:24:36.354 Completion Queue Entry Size 00:24:36.354 Max: 1 00:24:36.354 Min: 1 00:24:36.354 Number of Namespaces: 0 00:24:36.354 Compare Command: Not Supported 00:24:36.354 Write Uncorrectable Command: Not Supported 00:24:36.354 Dataset Management Command: Not Supported 00:24:36.354 Write Zeroes Command: Not Supported 00:24:36.354 Set Features Save Field: Not Supported 00:24:36.354 Reservations: Not Supported 00:24:36.354 Timestamp: Not Supported 00:24:36.354 Copy: Not Supported 00:24:36.354 Volatile Write Cache: Not Present 00:24:36.354 Atomic Write Unit (Normal): 1 00:24:36.354 Atomic Write Unit (PFail): 1 00:24:36.354 Atomic Compare & Write Unit: 1 00:24:36.354 Fused Compare & Write: Not Supported 00:24:36.354 Scatter-Gather List 00:24:36.354 SGL Command Set: Supported 00:24:36.354 SGL Keyed: Not Supported 00:24:36.354 SGL Bit Bucket Descriptor: Not Supported 00:24:36.354 SGL Metadata Pointer: Not Supported 00:24:36.354 Oversized SGL: Not Supported 00:24:36.354 SGL Metadata Address: Not Supported 00:24:36.354 SGL Offset: Supported 00:24:36.354 Transport SGL Data Block: Not Supported 00:24:36.354 Replay Protected Memory Block: Not Supported 00:24:36.354 00:24:36.354 Firmware Slot Information 00:24:36.354 ========================= 00:24:36.354 Active slot: 0 00:24:36.354 00:24:36.354 00:24:36.354 Error Log 00:24:36.354 
========= 00:24:36.354 00:24:36.354 Active Namespaces 00:24:36.354 ================= 00:24:36.354 Discovery Log Page 00:24:36.354 ================== 00:24:36.354 Generation Counter: 2 00:24:36.354 Number of Records: 2 00:24:36.354 Record Format: 0 00:24:36.354 00:24:36.354 Discovery Log Entry 0 00:24:36.354 ---------------------- 00:24:36.354 Transport Type: 3 (TCP) 00:24:36.354 Address Family: 1 (IPv4) 00:24:36.354 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:36.354 Entry Flags: 00:24:36.354 Duplicate Returned Information: 0 00:24:36.354 Explicit Persistent Connection Support for Discovery: 0 00:24:36.354 Transport Requirements: 00:24:36.355 Secure Channel: Not Specified 00:24:36.355 Port ID: 1 (0x0001) 00:24:36.355 Controller ID: 65535 (0xffff) 00:24:36.355 Admin Max SQ Size: 32 00:24:36.355 Transport Service Identifier: 4420 00:24:36.355 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:36.355 Transport Address: 10.0.0.1 00:24:36.355 Discovery Log Entry 1 00:24:36.355 ---------------------- 00:24:36.355 Transport Type: 3 (TCP) 00:24:36.355 Address Family: 1 (IPv4) 00:24:36.355 Subsystem Type: 2 (NVM Subsystem) 00:24:36.355 Entry Flags: 00:24:36.355 Duplicate Returned Information: 0 00:24:36.355 Explicit Persistent Connection Support for Discovery: 0 00:24:36.355 Transport Requirements: 00:24:36.355 Secure Channel: Not Specified 00:24:36.355 Port ID: 1 (0x0001) 00:24:36.355 Controller ID: 65535 (0xffff) 00:24:36.355 Admin Max SQ Size: 32 00:24:36.355 Transport Service Identifier: 4420 00:24:36.355 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:36.355 Transport Address: 10.0.0.1 00:24:36.355 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:36.613 get_feature(0x01) failed 00:24:36.613 get_feature(0x02) failed 00:24:36.613 get_feature(0x04) failed 00:24:36.613 ===================================================== 00:24:36.613 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:36.613 ===================================================== 00:24:36.613 Controller Capabilities/Features 00:24:36.613 ================================ 00:24:36.613 Vendor ID: 0000 00:24:36.613 Subsystem Vendor ID: 0000 00:24:36.613 Serial Number: 0a388dd6422ad56933bd 00:24:36.613 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:36.613 Firmware Version: 6.8.9-20 00:24:36.613 Recommended Arb Burst: 6 00:24:36.613 IEEE OUI Identifier: 00 00 00 00:24:36.613 Multi-path I/O 00:24:36.613 May have multiple subsystem ports: Yes 00:24:36.613 May have multiple controllers: Yes 00:24:36.613 Associated with SR-IOV VF: No 00:24:36.613 Max Data Transfer Size: Unlimited 00:24:36.613 Max Number of Namespaces: 1024 00:24:36.613 Max Number of I/O Queues: 128 00:24:36.613 NVMe Specification Version (VS): 1.3 00:24:36.614 NVMe Specification Version (Identify): 1.3 00:24:36.614 Maximum Queue Entries: 1024 00:24:36.614 Contiguous Queues Required: No 00:24:36.614 Arbitration Mechanisms Supported 00:24:36.614 Weighted Round Robin: Not Supported 00:24:36.614 Vendor Specific: Not Supported 00:24:36.614 Reset Timeout: 7500 ms 00:24:36.614 Doorbell Stride: 4 bytes 00:24:36.614 NVM Subsystem Reset: Not Supported 00:24:36.614 Command Sets Supported 00:24:36.614 NVM Command Set: Supported 00:24:36.614 Boot Partition: Not Supported 00:24:36.614 
Memory Page Size Minimum: 4096 bytes 00:24:36.614 Memory Page Size Maximum: 4096 bytes 00:24:36.614 Persistent Memory Region: Not Supported 00:24:36.614 Optional Asynchronous Events Supported 00:24:36.614 Namespace Attribute Notices: Supported 00:24:36.614 Firmware Activation Notices: Not Supported 00:24:36.614 ANA Change Notices: Supported 00:24:36.614 PLE Aggregate Log Change Notices: Not Supported 00:24:36.614 LBA Status Info Alert Notices: Not Supported 00:24:36.614 EGE Aggregate Log Change Notices: Not Supported 00:24:36.614 Normal NVM Subsystem Shutdown event: Not Supported 00:24:36.614 Zone Descriptor Change Notices: Not Supported 00:24:36.614 Discovery Log Change Notices: Not Supported 00:24:36.614 Controller Attributes 00:24:36.614 128-bit Host Identifier: Supported 00:24:36.614 Non-Operational Permissive Mode: Not Supported 00:24:36.614 NVM Sets: Not Supported 00:24:36.614 Read Recovery Levels: Not Supported 00:24:36.614 Endurance Groups: Not Supported 00:24:36.614 Predictable Latency Mode: Not Supported 00:24:36.614 Traffic Based Keep ALive: Supported 00:24:36.614 Namespace Granularity: Not Supported 00:24:36.614 SQ Associations: Not Supported 00:24:36.614 UUID List: Not Supported 00:24:36.614 Multi-Domain Subsystem: Not Supported 00:24:36.614 Fixed Capacity Management: Not Supported 00:24:36.614 Variable Capacity Management: Not Supported 00:24:36.614 Delete Endurance Group: Not Supported 00:24:36.614 Delete NVM Set: Not Supported 00:24:36.614 Extended LBA Formats Supported: Not Supported 00:24:36.614 Flexible Data Placement Supported: Not Supported 00:24:36.614 00:24:36.614 Controller Memory Buffer Support 00:24:36.614 ================================ 00:24:36.614 Supported: No 00:24:36.614 00:24:36.614 Persistent Memory Region Support 00:24:36.614 ================================ 00:24:36.614 Supported: No 00:24:36.614 00:24:36.614 Admin Command Set Attributes 00:24:36.614 ============================ 00:24:36.614 Security Send/Receive: Not Supported 00:24:36.614 Format NVM: Not Supported 00:24:36.614 Firmware Activate/Download: Not Supported 00:24:36.614 Namespace Management: Not Supported 00:24:36.614 Device Self-Test: Not Supported 00:24:36.614 Directives: Not Supported 00:24:36.614 NVMe-MI: Not Supported 00:24:36.614 Virtualization Management: Not Supported 00:24:36.614 Doorbell Buffer Config: Not Supported 00:24:36.614 Get LBA Status Capability: Not Supported 00:24:36.614 Command & Feature Lockdown Capability: Not Supported 00:24:36.614 Abort Command Limit: 4 00:24:36.614 Async Event Request Limit: 4 00:24:36.614 Number of Firmware Slots: N/A 00:24:36.614 Firmware Slot 1 Read-Only: N/A 00:24:36.614 Firmware Activation Without Reset: N/A 00:24:36.614 Multiple Update Detection Support: N/A 00:24:36.614 Firmware Update Granularity: No Information Provided 00:24:36.614 Per-Namespace SMART Log: Yes 00:24:36.614 Asymmetric Namespace Access Log Page: Supported 00:24:36.614 ANA Transition Time : 10 sec 00:24:36.614 00:24:36.614 Asymmetric Namespace Access Capabilities 00:24:36.614 ANA Optimized State : Supported 00:24:36.614 ANA Non-Optimized State : Supported 00:24:36.614 ANA Inaccessible State : Supported 00:24:36.614 ANA Persistent Loss State : Supported 00:24:36.614 ANA Change State : Supported 00:24:36.614 ANAGRPID is not changed : No 00:24:36.614 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:36.614 00:24:36.614 ANA Group Identifier Maximum : 128 00:24:36.614 Number of ANA Group Identifiers : 128 00:24:36.614 Max Number of Allowed Namespaces : 1024 00:24:36.614 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:36.614 Command Effects Log Page: Supported 00:24:36.614 Get Log Page Extended Data: Supported 00:24:36.614 Telemetry Log Pages: Not Supported 00:24:36.614 Persistent Event Log Pages: Not Supported 00:24:36.614 Supported Log Pages Log Page: May Support 00:24:36.614 Commands Supported & Effects Log Page: Not Supported 00:24:36.614 Feature Identifiers & Effects Log Page:May Support 00:24:36.614 NVMe-MI Commands & Effects Log Page: May Support 00:24:36.614 Data Area 4 for Telemetry Log: Not Supported 00:24:36.614 Error Log Page Entries Supported: 128 00:24:36.614 Keep Alive: Supported 00:24:36.614 Keep Alive Granularity: 1000 ms 00:24:36.614 00:24:36.614 NVM Command Set Attributes 00:24:36.614 ========================== 00:24:36.614 Submission Queue Entry Size 00:24:36.614 Max: 64 00:24:36.614 Min: 64 00:24:36.614 Completion Queue Entry Size 00:24:36.614 Max: 16 00:24:36.614 Min: 16 00:24:36.614 Number of Namespaces: 1024 00:24:36.614 Compare Command: Not Supported 00:24:36.614 Write Uncorrectable Command: Not Supported 00:24:36.614 Dataset Management Command: Supported 00:24:36.614 Write Zeroes Command: Supported 00:24:36.614 Set Features Save Field: Not Supported 00:24:36.614 Reservations: Not Supported 00:24:36.614 Timestamp: Not Supported 00:24:36.614 Copy: Not Supported 00:24:36.614 Volatile Write Cache: Present 00:24:36.614 Atomic Write Unit (Normal): 1 00:24:36.614 Atomic Write Unit (PFail): 1 00:24:36.614 Atomic Compare & Write Unit: 1 00:24:36.614 Fused Compare & Write: Not Supported 00:24:36.614 Scatter-Gather List 00:24:36.614 SGL Command Set: Supported 00:24:36.614 SGL Keyed: Not Supported 00:24:36.614 SGL Bit Bucket Descriptor: Not Supported 00:24:36.614 SGL Metadata Pointer: Not Supported 00:24:36.614 Oversized SGL: Not Supported 00:24:36.614 SGL Metadata Address: Not Supported 00:24:36.614 SGL Offset: Supported 00:24:36.614 Transport SGL Data Block: Not Supported 00:24:36.614 Replay Protected Memory Block: Not Supported 00:24:36.614 00:24:36.614 Firmware Slot Information 00:24:36.614 ========================= 00:24:36.614 Active slot: 0 00:24:36.614 00:24:36.614 Asymmetric Namespace Access 00:24:36.614 =========================== 00:24:36.614 Change Count : 0 00:24:36.614 Number of ANA Group Descriptors : 1 00:24:36.614 ANA Group Descriptor : 0 00:24:36.614 ANA Group ID : 1 00:24:36.614 Number of NSID Values : 1 00:24:36.614 Change Count : 0 00:24:36.614 ANA State : 1 00:24:36.614 Namespace Identifier : 1 00:24:36.614 00:24:36.614 Commands Supported and Effects 00:24:36.614 ============================== 00:24:36.614 Admin Commands 00:24:36.614 -------------- 00:24:36.614 Get Log Page (02h): Supported 00:24:36.614 Identify (06h): Supported 00:24:36.614 Abort (08h): Supported 00:24:36.614 Set Features (09h): Supported 00:24:36.614 Get Features (0Ah): Supported 00:24:36.614 Asynchronous Event Request (0Ch): Supported 00:24:36.614 Keep Alive (18h): Supported 00:24:36.614 I/O Commands 00:24:36.614 ------------ 00:24:36.614 Flush (00h): Supported 00:24:36.614 Write (01h): Supported LBA-Change 00:24:36.614 Read (02h): Supported 00:24:36.614 Write Zeroes (08h): Supported LBA-Change 00:24:36.614 Dataset Management (09h): Supported 00:24:36.614 00:24:36.614 Error Log 00:24:36.614 ========= 00:24:36.614 Entry: 0 00:24:36.614 Error Count: 0x3 00:24:36.614 Submission Queue Id: 0x0 00:24:36.614 Command Id: 0x5 00:24:36.614 Phase Bit: 0 00:24:36.614 Status Code: 0x2 00:24:36.614 Status Code Type: 0x0 00:24:36.614 Do Not Retry: 1 00:24:36.614 
Error Location: 0x28 00:24:36.614 LBA: 0x0 00:24:36.614 Namespace: 0x0 00:24:36.614 Vendor Log Page: 0x0 00:24:36.614 ----------- 00:24:36.614 Entry: 1 00:24:36.614 Error Count: 0x2 00:24:36.614 Submission Queue Id: 0x0 00:24:36.614 Command Id: 0x5 00:24:36.614 Phase Bit: 0 00:24:36.614 Status Code: 0x2 00:24:36.614 Status Code Type: 0x0 00:24:36.614 Do Not Retry: 1 00:24:36.614 Error Location: 0x28 00:24:36.614 LBA: 0x0 00:24:36.614 Namespace: 0x0 00:24:36.614 Vendor Log Page: 0x0 00:24:36.614 ----------- 00:24:36.614 Entry: 2 00:24:36.614 Error Count: 0x1 00:24:36.614 Submission Queue Id: 0x0 00:24:36.614 Command Id: 0x4 00:24:36.614 Phase Bit: 0 00:24:36.614 Status Code: 0x2 00:24:36.614 Status Code Type: 0x0 00:24:36.614 Do Not Retry: 1 00:24:36.614 Error Location: 0x28 00:24:36.614 LBA: 0x0 00:24:36.614 Namespace: 0x0 00:24:36.614 Vendor Log Page: 0x0 00:24:36.614 00:24:36.614 Number of Queues 00:24:36.614 ================ 00:24:36.614 Number of I/O Submission Queues: 128 00:24:36.614 Number of I/O Completion Queues: 128 00:24:36.614 00:24:36.614 ZNS Specific Controller Data 00:24:36.614 ============================ 00:24:36.614 Zone Append Size Limit: 0 00:24:36.614 00:24:36.614 00:24:36.614 Active Namespaces 00:24:36.614 ================= 00:24:36.615 get_feature(0x05) failed 00:24:36.615 Namespace ID:1 00:24:36.615 Command Set Identifier: NVM (00h) 00:24:36.615 Deallocate: Supported 00:24:36.615 Deallocated/Unwritten Error: Not Supported 00:24:36.615 Deallocated Read Value: Unknown 00:24:36.615 Deallocate in Write Zeroes: Not Supported 00:24:36.615 Deallocated Guard Field: 0xFFFF 00:24:36.615 Flush: Supported 00:24:36.615 Reservation: Not Supported 00:24:36.615 Namespace Sharing Capabilities: Multiple Controllers 00:24:36.615 Size (in LBAs): 1953525168 (931GiB) 00:24:36.615 Capacity (in LBAs): 1953525168 (931GiB) 00:24:36.615 Utilization (in LBAs): 1953525168 (931GiB) 00:24:36.615 UUID: bc6e7185-3e50-4547-a18b-030c731a5d19 00:24:36.615 Thin Provisioning: Not Supported 00:24:36.615 Per-NS Atomic Units: Yes 00:24:36.615 Atomic Boundary Size (Normal): 0 00:24:36.615 Atomic Boundary Size (PFail): 0 00:24:36.615 Atomic Boundary Offset: 0 00:24:36.615 NGUID/EUI64 Never Reused: No 00:24:36.615 ANA group ID: 1 00:24:36.615 Namespace Write Protected: No 00:24:36.615 Number of LBA Formats: 1 00:24:36.615 Current LBA Format: LBA Format #00 00:24:36.615 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:36.615 00:24:36.615 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:36.615 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:36.615 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:24:36.615 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:36.615 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:24:36.615 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:36.615 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:36.615 rmmod nvme_tcp 00:24:36.615 rmmod nvme_fabrics 00:24:36.615 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:36.615 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:24:36.615 18:31:54 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:24:36.615 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:24:36.615 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:36.615 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:36.615 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:36.615 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:24:36.615 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:24:36.615 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:36.615 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:24:36.615 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:36.615 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:36.615 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.615 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:36.615 18:31:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.519 18:31:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:38.779 18:31:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:38.779 18:31:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:38.779 18:31:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:24:38.779 18:31:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:38.779 18:31:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:38.779 18:31:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:38.779 18:31:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:38.779 18:31:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:24:38.779 18:31:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:24:38.779 18:31:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:40.155 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:40.155 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:40.155 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:40.155 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:40.155 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:40.155 0000:00:04.2 
(8086 0e22): ioatdma -> vfio-pci 00:24:40.155 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:40.155 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:40.155 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:40.155 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:40.155 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:40.155 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:40.155 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:40.155 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:40.155 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:40.155 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:41.089 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:24:41.089 00:24:41.089 real 0m9.967s 00:24:41.089 user 0m2.200s 00:24:41.089 sys 0m3.698s 00:24:41.089 18:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:41.089 18:31:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.089 ************************************ 00:24:41.089 END TEST nvmf_identify_kernel_target 00:24:41.089 ************************************ 00:24:41.089 18:31:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:41.089 18:31:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:41.089 18:31:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:41.089 18:31:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.089 ************************************ 00:24:41.089 START TEST nvmf_auth_host 00:24:41.089 ************************************ 00:24:41.089 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:41.349 * Looking for test storage... 
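The identify run above finishes with clean_kernel_target unwinding that configfs state in reverse and unloading the modules, before setup.sh rebinds the devices to vfio-pci for the next suite. A minimal sketch of that teardown, mirroring the rm/rmdir/modprobe entries in the trace (the redirection target of the `echo 0` is an assumption, again because xtrace hides it):

# Sketch: tear down the kernel target in reverse order -- unlink the
# port->subsystem association before removing the configfs directories.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

echo 0 > "$subsys/namespaces/1/enable"     # assumed target of the 'echo 0' above
rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$subsys/namespaces/1"
rmdir "$nvmet/ports/1"
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet                # unload once configfs is empty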
00:24:41.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:41.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.349 --rc genhtml_branch_coverage=1 00:24:41.349 --rc genhtml_function_coverage=1 00:24:41.349 --rc genhtml_legend=1 00:24:41.349 --rc geninfo_all_blocks=1 00:24:41.349 --rc geninfo_unexecuted_blocks=1 00:24:41.349 00:24:41.349 ' 00:24:41.349 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:41.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.350 --rc genhtml_branch_coverage=1 00:24:41.350 --rc genhtml_function_coverage=1 00:24:41.350 --rc genhtml_legend=1 00:24:41.350 --rc geninfo_all_blocks=1 00:24:41.350 --rc geninfo_unexecuted_blocks=1 00:24:41.350 00:24:41.350 ' 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:41.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.350 --rc genhtml_branch_coverage=1 00:24:41.350 --rc genhtml_function_coverage=1 00:24:41.350 --rc genhtml_legend=1 00:24:41.350 --rc geninfo_all_blocks=1 00:24:41.350 --rc geninfo_unexecuted_blocks=1 00:24:41.350 00:24:41.350 ' 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:41.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.350 --rc genhtml_branch_coverage=1 00:24:41.350 --rc genhtml_function_coverage=1 00:24:41.350 --rc genhtml_legend=1 00:24:41.350 --rc geninfo_all_blocks=1 00:24:41.350 --rc geninfo_unexecuted_blocks=1 00:24:41.350 00:24:41.350 ' 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:41.350 18:31:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:41.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:41.350 18:31:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:43.880 18:32:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:43.880 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:43.880 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.880 
18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:43.880 Found net devices under 0000:09:00.0: cvl_0_0 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.880 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:43.880 Found net devices under 0000:09:00.1: cvl_0_1 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:43.881 18:32:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:43.881 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:43.881 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:24:43.881 00:24:43.881 --- 10.0.0.2 ping statistics --- 00:24:43.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.881 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:43.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:43.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:24:43.881 00:24:43.881 --- 10.0.0.1 ping statistics --- 00:24:43.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.881 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=1341130 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 1341130 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1341130 ']' 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
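Note: the block above wires the two E810 ports into a point-to-point TCP test topology and then starts nvmf_tgt inside the new namespace. A condensed replay of the commands it runs (interface names, addresses and the namespace name are the ones from this run):
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # first port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                             # second port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port
ping -c 1 10.0.0.2                                              # reachability check, both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# nvmf_tgt is then launched as "ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth"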
00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:43.881 18:32:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=37072bff09319e77a0bcb88e3b8fccfc 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.BdV 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 37072bff09319e77a0bcb88e3b8fccfc 0 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 37072bff09319e77a0bcb88e3b8fccfc 0 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=37072bff09319e77a0bcb88e3b8fccfc 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.BdV 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.BdV 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.BdV 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:24:44.139 18:32:02 
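Note: gen_dhchap_key above produces every secret used in the rest of the test. A sketch of its observable behaviour; the inline "python -" that builds the final DHHC-1 string, and the redirection that writes it into the temp file, are not printed by xtrace, so they are only described in comments:
digest=null; len=32                              # digest index: null=0, sha256=1, sha384=2, sha512=3
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # 32 hex chars = 16 random bytes
file=$(mktemp -t "spdk.key-$digest.XXX")         # e.g. /tmp/spdk.key-null.BdV
# format_dhchap_key wraps the hex secret as "DHHC-1:0<digest index>:<base64 payload>:" and writes it
# to $file; the hex reappears base64 encoded in the keys used later (37072bff... from this step shows
# up below as DHHC-1:00:MzcwNzJiZmYw...).
chmod 0600 "$file"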
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=8e7010c545a067050bddc40a9a17ccbd96688f012b5e22ce5dc181d481969f39 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.YsR 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 8e7010c545a067050bddc40a9a17ccbd96688f012b5e22ce5dc181d481969f39 3 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 8e7010c545a067050bddc40a9a17ccbd96688f012b5e22ce5dc181d481969f39 3 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=8e7010c545a067050bddc40a9a17ccbd96688f012b5e22ce5dc181d481969f39 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.YsR 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.YsR 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.YsR 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=b8191e75cd09e51b816df6bacf8dd3e745ea0dba912ac218 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.LCj 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key b8191e75cd09e51b816df6bacf8dd3e745ea0dba912ac218 0 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 b8191e75cd09e51b816df6bacf8dd3e745ea0dba912ac218 0 
00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=b8191e75cd09e51b816df6bacf8dd3e745ea0dba912ac218 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.LCj 00:24:44.139 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.LCj 00:24:44.140 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.LCj 00:24:44.140 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:44.140 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:24:44.140 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:44.140 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:24:44.140 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:24:44.140 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:24:44.140 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:44.140 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=82f19791cbdce7b83802b6d78d2d3ff51c10b11dd6d7a5a8 00:24:44.140 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:24:44.140 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.Sru 00:24:44.140 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 82f19791cbdce7b83802b6d78d2d3ff51c10b11dd6d7a5a8 2 00:24:44.140 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 82f19791cbdce7b83802b6d78d2d3ff51c10b11dd6d7a5a8 2 00:24:44.140 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:24:44.140 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:24:44.140 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=82f19791cbdce7b83802b6d78d2d3ff51c10b11dd6d7a5a8 00:24:44.140 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:24:44.140 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.Sru 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.Sru 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Sru 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:44.398 18:32:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=c528c387bb8e8bb039616e756a05cd48 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.xrM 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key c528c387bb8e8bb039616e756a05cd48 1 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 c528c387bb8e8bb039616e756a05cd48 1 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=c528c387bb8e8bb039616e756a05cd48 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.xrM 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.xrM 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.xrM 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=d31990e5398989147fd9b21a74f2e812 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.XIw 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key d31990e5398989147fd9b21a74f2e812 1 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 d31990e5398989147fd9b21a74f2e812 1 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
key=d31990e5398989147fd9b21a74f2e812 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.XIw 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.XIw 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.XIw 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=aca413ea10796ce24c8e47dbf7492ad4d59379f01f477a40 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.Teb 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key aca413ea10796ce24c8e47dbf7492ad4d59379f01f477a40 2 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 aca413ea10796ce24c8e47dbf7492ad4d59379f01f477a40 2 00:24:44.398 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=aca413ea10796ce24c8e47dbf7492ad4d59379f01f477a40 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.Teb 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.Teb 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Teb 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:24:44.399 18:32:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=77c8f600155286021f48a35adea59db9 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.KxD 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 77c8f600155286021f48a35adea59db9 0 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 77c8f600155286021f48a35adea59db9 0 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=77c8f600155286021f48a35adea59db9 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.KxD 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.KxD 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.KxD 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=3821c8f33a83c3f8f557026b327adc8dd0315afc920382470dab149dd6d44709 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.9qq 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 3821c8f33a83c3f8f557026b327adc8dd0315afc920382470dab149dd6d44709 3 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 3821c8f33a83c3f8f557026b327adc8dd0315afc920382470dab149dd6d44709 3 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=3821c8f33a83c3f8f557026b327adc8dd0315afc920382470dab149dd6d44709 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:24:44.399 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@731 -- # python - 00:24:44.657 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.9qq 00:24:44.657 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.9qq 00:24:44.657 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.9qq 00:24:44.657 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:44.657 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1341130 00:24:44.657 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1341130 ']' 00:24:44.657 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.657 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:44.657 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.657 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:44.657 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.915 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:44.915 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:24:44.915 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:44.915 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.BdV 00:24:44.915 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.915 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.915 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.915 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.YsR ]] 00:24:44.915 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YsR 00:24:44.915 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.915 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.915 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.915 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:44.915 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.LCj 00:24:44.915 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.915 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.915 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.915 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Sru ]] 00:24:44.915 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.Sru 00:24:44.915 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.915 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.915 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.915 18:32:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.xrM 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.XIw ]] 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XIw 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Teb 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.KxD ]] 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.KxD 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.9qq 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:24:44.915 18:32:03 
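Note: once nvmf_tgt is up, the generated files are registered with SPDK's keyring so the bdev_nvme RPCs can refer to them by name. Equivalent direct calls for the registrations above (rpc_cmd is the autotest wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock; the paths are the temp files from this run):
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc keyring_file_add_key key0  /tmp/spdk.key-null.BdV     # host secret, key slot 0
$rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YsR   # controller secret for slot 0
$rpc keyring_file_add_key key1  /tmp/spdk.key-null.LCj
$rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Sru
$rpc keyring_file_add_key key2  /tmp/spdk.key-sha256.xrM
$rpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XIw
$rpc keyring_file_add_key key3  /tmp/spdk.key-sha384.Teb
$rpc keyring_file_add_key ckey3 /tmp/spdk.key-null.KxD
$rpc keyring_file_add_key key4  /tmp/spdk.key-sha512.9qq   # slot 4 has no controller secret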
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:44.915 18:32:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:45.849 Waiting for block devices as requested 00:24:45.849 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:46.106 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:46.106 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:46.106 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:46.364 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:46.364 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:46.364 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:46.364 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:46.622 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:24:46.622 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:46.880 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:46.880 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:46.880 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:46.880 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:47.137 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:47.137 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:47.137 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:47.703 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:24:47.703 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:47.703 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:24:47.703 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:47.703 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:47.703 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:47.703 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:24:47.703 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:47.703 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:47.703 No valid GPT data, bailing 00:24:47.703 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:47.703 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:24:47.703 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:24:47.703 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:24:47.703 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:24:47.703 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:47.703 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:47.703 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:47.703 18:32:05 
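Note: configure_kernel_target above builds a kernel nvmet subsystem backed by the local /dev/nvme0n1 (chosen because it carries no GPT) and exposes it on the 10.0.0.1 TCP port. set -x does not print redirection targets, so the configfs attribute names in this sketch are the standard nvmet ones and are an assumption about where each echo lands:
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
modprobe nvmet
mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"   # backing block device
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"             # expose the subsystem on the port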
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:47.703 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:24:47.703 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:24:47.703 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:24:47.703 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:24:47.703 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp 00:24:47.703 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:24:47.703 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:24:47.703 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:47.703 18:32:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:24:47.962 00:24:47.962 Discovery Log Number of Records 2, Generation counter 2 00:24:47.962 =====Discovery Log Entry 0====== 00:24:47.962 trtype: tcp 00:24:47.962 adrfam: ipv4 00:24:47.962 subtype: current discovery subsystem 00:24:47.962 treq: not specified, sq flow control disable supported 00:24:47.962 portid: 1 00:24:47.962 trsvcid: 4420 00:24:47.962 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:47.962 traddr: 10.0.0.1 00:24:47.962 eflags: none 00:24:47.962 sectype: none 00:24:47.962 =====Discovery Log Entry 1====== 00:24:47.962 trtype: tcp 00:24:47.962 adrfam: ipv4 00:24:47.962 subtype: nvme subsystem 00:24:47.962 treq: not specified, sq flow control disable supported 00:24:47.962 portid: 1 00:24:47.962 trsvcid: 4420 00:24:47.962 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:47.962 traddr: 10.0.0.1 00:24:47.962 eflags: none 00:24:47.962 sectype: none 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
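Note: the discovery above confirms the kernel port answers on 10.0.0.1:4420 with both the discovery subsystem and cnode0, and host/auth.sh then restricts the subsystem to a single host NQN before programming any secrets. The discover invocation and the mkdir/ln -s are taken from the trace; the target of the "echo 0" is elided by xtrace, and attr_allow_any_host is assumed here:
nvme discover -t tcp -a 10.0.0.1 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
    --hostid=29f67375-a902-e411-ace9-001e67bc3c9a
mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/attr_allow_any_host
ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 \
      /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/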
-- host/auth.sh@49 -- # echo ffdhe2048 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: ]] 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.962 nvme0n1 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.962 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.963 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.963 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:47.963 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:47.963 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.963 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:47.963 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.963 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:47.963 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:47.963 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:47.963 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzcwNzJiZmYwOTMxOWU3N2EwYmNiODhlM2I4ZmNjZmOKUwax: 00:24:47.963 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: 00:24:47.963 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:47.963 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:47.963 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzcwNzJiZmYwOTMxOWU3N2EwYmNiODhlM2I4ZmNjZmOKUwax: 00:24:47.963 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: ]] 00:24:47.963 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: 00:24:47.963 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
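Note: one authentication cycle, as exercised above, has a kernel half and an SPDK half. On the kernel side, nvmet_auth_set_key writes the hash, DH group and the two DHHC-1 secrets into the host entry (the redirection targets are elided by xtrace; dhchap_hash, dhchap_dhgroup, dhchap_key and dhchap_ctrl_key are the standard nvmet attributes and an assumption here). On the SPDK side, the bdev_nvme RPCs reference the keyring names registered earlier, with the flags visible in the trace:
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)'              > "$host/dhchap_hash"
echo ffdhe2048                   > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:MzcwNzJiZmYw...' > "$host/dhchap_key"       # host secret for key slot 0
echo 'DHHC-1:03:OGU3MDEwYzU0...' > "$host/dhchap_ctrl_key"  # controller secret for slot 0
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc bdev_nvme_get_controllers        # expect nvme0; the namespace appears as nvme0n1
$rpc bdev_nvme_detach_controller nvme0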
00:24:47.963 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.963 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:47.963 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:47.963 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:47.963 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.963 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:47.963 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.963 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.221 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.221 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.221 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:24:48.221 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:24:48.221 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:24:48.221 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.221 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.221 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:24:48.221 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.221 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:24:48.221 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:24:48.221 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:24:48.221 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:48.221 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.221 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.221 nvme0n1 00:24:48.221 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.221 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.221 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.221 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.221 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.221 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.221 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.221 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.221 18:32:06 
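Note: from here the same cycle repeats for every combination the script covers. The digest and DH group lists were printed earlier (sha256,sha384,sha512 and ffdhe2048 through ffdhe8192) and there are five key slots, so the loop below, reconstructed from the for-lines of host/auth.sh visible in the trace, should run 3 x 5 x 5 = 75 connect/verify/detach cycles using the script's own helpers:
for digest in sha256 sha384 sha512; do
  for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
    for keyid in 0 1 2 3 4; do
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # program the kernel host entry
      connect_authenticate "$digest" "$dhgroup" "$keyid"   # attach, check nvme0, detach
    done
  done
done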
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.221 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.221 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.221 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.221 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:48.221 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.221 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:48.221 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:48.221 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:48.221 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:24:48.221 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:24:48.222 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:48.222 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:48.222 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:24:48.222 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: ]] 00:24:48.222 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:24:48.222 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:48.222 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.222 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:48.222 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:48.222 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:48.222 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.222 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:48.222 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.222 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.222 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.222 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.222 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:24:48.222 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:24:48.222 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:24:48.222 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.222 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.222 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:24:48.222 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.222 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:24:48.222 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:24:48.222 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:24:48.222 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:48.222 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.222 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.480 nvme0n1 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: ]] 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.480 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.738 nvme0n1 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWNhNDEzZWExMDc5NmNlMjRjOGU0N2RiZjc0OTJhZDRkNTkzNzlmMDFmNDc3YTQwWKD+Zg==: 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWNhNDEzZWExMDc5NmNlMjRjOGU0N2RiZjc0OTJhZDRkNTkzNzlmMDFmNDc3YTQwWKD+Zg==: 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: ]] 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.738 18:32:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.996 nvme0n1 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzgyMWM4ZjMzYTgzYzNmOGY1NTcwMjZiMzI3YWRjOGRkMDMxNWFmYzkyMDM4MjQ3MGRhYjE0OWRkNmQ0NDcwOYcCwbM=: 
00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzgyMWM4ZjMzYTgzYzNmOGY1NTcwMjZiMzI3YWRjOGRkMDMxNWFmYzkyMDM4MjQ3MGRhYjE0OWRkNmQ0NDcwOYcCwbM=: 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.996 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.255 nvme0n1 00:24:49.255 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.255 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.255 18:32:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.255 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.255 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzcwNzJiZmYwOTMxOWU3N2EwYmNiODhlM2I4ZmNjZmOKUwax: 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzcwNzJiZmYwOTMxOWU3N2EwYmNiODhlM2I4ZmNjZmOKUwax: 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: ]] 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:49.256 
18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.256 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.548 nvme0n1 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: ]] 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:24:49.548 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:24:49.549 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:24:49.549 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.549 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.549 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:24:49.549 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.549 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:24:49.549 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:24:49.549 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:24:49.549 18:32:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:49.549 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.549 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.830 nvme0n1 00:24:49.830 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.830 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.830 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.830 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.830 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.830 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.830 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.830 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.830 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.830 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.830 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.830 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.830 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:49.830 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.830 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:49.830 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:49.830 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:49.830 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:24:49.830 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:24:49.830 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:49.830 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:49.830 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:24:49.831 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: ]] 00:24:49.831 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:24:49.831 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:49.831 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.831 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:49.831 18:32:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:49.831 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:49.831 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.831 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:49.831 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.831 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.831 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.831 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.831 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:24:49.831 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:24:49.831 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:24:49.831 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.831 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.831 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:24:49.831 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.831 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:24:49.831 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:24:49.831 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:24:49.831 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:49.831 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.831 18:32:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.831 nvme0n1 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWNhNDEzZWExMDc5NmNlMjRjOGU0N2RiZjc0OTJhZDRkNTkzNzlmMDFmNDc3YTQwWKD+Zg==: 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWNhNDEzZWExMDc5NmNlMjRjOGU0N2RiZjc0OTJhZDRkNTkzNzlmMDFmNDc3YTQwWKD+Zg==: 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: ]] 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:24:50.089 18:32:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.089 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.346 nvme0n1 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzgyMWM4ZjMzYTgzYzNmOGY1NTcwMjZiMzI3YWRjOGRkMDMxNWFmYzkyMDM4MjQ3MGRhYjE0OWRkNmQ0NDcwOYcCwbM=: 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzgyMWM4ZjMzYTgzYzNmOGY1NTcwMjZiMzI3YWRjOGRkMDMxNWFmYzkyMDM4MjQ3MGRhYjE0OWRkNmQ0NDcwOYcCwbM=: 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 
00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.347 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.605 nvme0n1 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzcwNzJiZmYwOTMxOWU3N2EwYmNiODhlM2I4ZmNjZmOKUwax: 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzcwNzJiZmYwOTMxOWU3N2EwYmNiODhlM2I4ZmNjZmOKUwax: 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: ]] 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local 
-A ip_candidates 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.605 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.863 nvme0n1 00:24:50.863 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.863 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.863 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.863 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.863 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.863 18:32:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:50.863 18:32:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: ]] 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.863 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.121 nvme0n1 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: ]] 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.121 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.379 nvme0n1 00:24:51.379 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.379 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.379 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.379 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.379 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.379 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWNhNDEzZWExMDc5NmNlMjRjOGU0N2RiZjc0OTJhZDRkNTkzNzlmMDFmNDc3YTQwWKD+Zg==: 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWNhNDEzZWExMDc5NmNlMjRjOGU0N2RiZjc0OTJhZDRkNTkzNzlmMDFmNDc3YTQwWKD+Zg==: 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: ]] 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:24:51.637 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:24:51.638 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:51.638 18:32:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.638 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.896 nvme0n1 00:24:51.896 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.896 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.896 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.896 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.896 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.896 18:32:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzgyMWM4ZjMzYTgzYzNmOGY1NTcwMjZiMzI3YWRjOGRkMDMxNWFmYzkyMDM4MjQ3MGRhYjE0OWRkNmQ0NDcwOYcCwbM=: 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzgyMWM4ZjMzYTgzYzNmOGY1NTcwMjZiMzI3YWRjOGRkMDMxNWFmYzkyMDM4MjQ3MGRhYjE0OWRkNmQ0NDcwOYcCwbM=: 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.896 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.153 nvme0n1 00:24:52.153 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.153 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.153 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.153 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.153 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.153 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.153 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.153 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.153 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.153 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.153 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.153 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:52.153 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.153 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:52.153 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:24:52.153 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:52.153 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:52.153 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:52.153 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzcwNzJiZmYwOTMxOWU3N2EwYmNiODhlM2I4ZmNjZmOKUwax: 00:24:52.153 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: 00:24:52.153 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:52.153 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:52.153 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzcwNzJiZmYwOTMxOWU3N2EwYmNiODhlM2I4ZmNjZmOKUwax: 00:24:52.154 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: ]] 00:24:52.154 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: 00:24:52.154 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:52.154 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.154 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:52.154 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:52.154 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:52.154 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.154 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:52.154 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.154 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.154 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.154 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.154 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:24:52.154 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:24:52.154 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:24:52.154 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.154 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.154 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:24:52.154 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.154 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:24:52.154 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 
]] 00:24:52.154 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:24:52.154 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:52.154 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.154 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.719 nvme0n1 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: ]] 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 
1 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.719 18:32:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.285 nvme0n1 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.285 18:32:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: ]] 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.285 18:32:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.852 nvme0n1 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWNhNDEzZWExMDc5NmNlMjRjOGU0N2RiZjc0OTJhZDRkNTkzNzlmMDFmNDc3YTQwWKD+Zg==: 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWNhNDEzZWExMDc5NmNlMjRjOGU0N2RiZjc0OTJhZDRkNTkzNzlmMDFmNDc3YTQwWKD+Zg==: 00:24:53.852 
18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: ]] 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.852 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.418 nvme0n1 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzgyMWM4ZjMzYTgzYzNmOGY1NTcwMjZiMzI3YWRjOGRkMDMxNWFmYzkyMDM4MjQ3MGRhYjE0OWRkNmQ0NDcwOYcCwbM=: 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzgyMWM4ZjMzYTgzYzNmOGY1NTcwMjZiMzI3YWRjOGRkMDMxNWFmYzkyMDM4MjQ3MGRhYjE0OWRkNmQ0NDcwOYcCwbM=: 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # local -A ip_candidates 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.418 18:32:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.983 nvme0n1 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzcwNzJiZmYwOTMxOWU3N2EwYmNiODhlM2I4ZmNjZmOKUwax: 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: 00:24:54.983 18:32:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzcwNzJiZmYwOTMxOWU3N2EwYmNiODhlM2I4ZmNjZmOKUwax: 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: ]] 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.983 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:24:54.984 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:24:54.984 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:24:54.984 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.984 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.984 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:24:54.984 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.984 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:24:54.984 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:24:54.984 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:24:54.984 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:54.984 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.984 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.917 nvme0n1 00:24:55.917 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.917 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.917 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.917 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.917 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.917 18:32:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: ]] 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.917 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.849 nvme0n1 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.849 18:32:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: ]] 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.849 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:24:56.850 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:24:56.850 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:24:56.850 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.850 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.850 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:24:56.850 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.850 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:24:56.850 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:24:56.850 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:24:56.850 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:56.850 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.850 18:32:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.783 nvme0n1 00:24:57.783 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.783 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.783 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.783 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWNhNDEzZWExMDc5NmNlMjRjOGU0N2RiZjc0OTJhZDRkNTkzNzlmMDFmNDc3YTQwWKD+Zg==: 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWNhNDEzZWExMDc5NmNlMjRjOGU0N2RiZjc0OTJhZDRkNTkzNzlmMDFmNDc3YTQwWKD+Zg==: 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: ]] 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:57.784 18:32:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.784 18:32:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.716 nvme0n1 00:24:58.716 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.716 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.716 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.716 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.716 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.716 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.716 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.716 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.716 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.716 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.716 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.716 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.716 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:58.716 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.716 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:58.716 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:58.716 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:58.716 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzgyMWM4ZjMzYTgzYzNmOGY1NTcwMjZiMzI3YWRjOGRkMDMxNWFmYzkyMDM4MjQ3MGRhYjE0OWRkNmQ0NDcwOYcCwbM=: 00:24:58.716 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:58.716 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:58.716 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:58.716 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzgyMWM4ZjMzYTgzYzNmOGY1NTcwMjZiMzI3YWRjOGRkMDMxNWFmYzkyMDM4MjQ3MGRhYjE0OWRkNmQ0NDcwOYcCwbM=: 00:24:58.716 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:58.716 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:58.716 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.716 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:58.716 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:58.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:58.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:58.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:24:58.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:24:58.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:24:58.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:24:58.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:24:58.717 18:32:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:24:58.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:24:58.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:58.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.717 18:32:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.650 nvme0n1 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzcwNzJiZmYwOTMxOWU3N2EwYmNiODhlM2I4ZmNjZmOKUwax: 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzcwNzJiZmYwOTMxOWU3N2EwYmNiODhlM2I4ZmNjZmOKUwax: 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: ]] 00:24:59.650 
18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.650 nvme0n1 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.650 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: ]] 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@767 -- # local ip 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:24:59.909 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:24:59.910 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:59.910 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.910 18:32:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.910 nvme0n1 00:24:59.910 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.910 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.910 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.910 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.910 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.910 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.910 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.910 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.910 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.910 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.910 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.910 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.910 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:59.910 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.910 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:59.910 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:59.910 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:59.910 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:24:59.910 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:24:59.910 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:59.910 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:59.910 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:24:59.910 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: ]] 00:24:59.910 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:24:59.910 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:59.910 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.910 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:59.910 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:59.910 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:59.910 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.910 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:59.910 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.910 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.168 nvme0n1 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWNhNDEzZWExMDc5NmNlMjRjOGU0N2RiZjc0OTJhZDRkNTkzNzlmMDFmNDc3YTQwWKD+Zg==: 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWNhNDEzZWExMDc5NmNlMjRjOGU0N2RiZjc0OTJhZDRkNTkzNzlmMDFmNDc3YTQwWKD+Zg==: 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: ]] 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:00.168 18:32:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.168 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.426 nvme0n1 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 
00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzgyMWM4ZjMzYTgzYzNmOGY1NTcwMjZiMzI3YWRjOGRkMDMxNWFmYzkyMDM4MjQ3MGRhYjE0OWRkNmQ0NDcwOYcCwbM=: 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzgyMWM4ZjMzYTgzYzNmOGY1NTcwMjZiMzI3YWRjOGRkMDMxNWFmYzkyMDM4MjQ3MGRhYjE0OWRkNmQ0NDcwOYcCwbM=: 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:00.426 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.684 nvme0n1 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzcwNzJiZmYwOTMxOWU3N2EwYmNiODhlM2I4ZmNjZmOKUwax: 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzcwNzJiZmYwOTMxOWU3N2EwYmNiODhlM2I4ZmNjZmOKUwax: 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: ]] 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:00.684 18:32:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:00.684 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:00.685 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:00.685 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.685 18:32:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.943 nvme0n1 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.943 18:32:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: ]] 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:00.943 18:32:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.943 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.201 nvme0n1 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: ]] 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:01.201 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:01.202 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:01.202 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.202 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.460 nvme0n1 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWNhNDEzZWExMDc5NmNlMjRjOGU0N2RiZjc0OTJhZDRkNTkzNzlmMDFmNDc3YTQwWKD+Zg==: 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWNhNDEzZWExMDc5NmNlMjRjOGU0N2RiZjc0OTJhZDRkNTkzNzlmMDFmNDc3YTQwWKD+Zg==: 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: ]] 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local 
-A ip_candidates 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.460 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.719 nvme0n1 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzgyMWM4ZjMzYTgzYzNmOGY1NTcwMjZiMzI3YWRjOGRkMDMxNWFmYzkyMDM4MjQ3MGRhYjE0OWRkNmQ0NDcwOYcCwbM=: 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:01.719 
18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzgyMWM4ZjMzYTgzYzNmOGY1NTcwMjZiMzI3YWRjOGRkMDMxNWFmYzkyMDM4MjQ3MGRhYjE0OWRkNmQ0NDcwOYcCwbM=: 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.719 18:32:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.977 nvme0n1 00:25:01.977 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.977 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.977 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.977 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.977 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.977 
18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.977 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.977 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.977 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.977 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.977 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.977 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:01.977 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.977 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:01.977 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.977 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:01.977 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:01.977 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:01.977 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzcwNzJiZmYwOTMxOWU3N2EwYmNiODhlM2I4ZmNjZmOKUwax: 00:25:01.977 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: 00:25:01.977 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:01.977 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:01.977 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzcwNzJiZmYwOTMxOWU3N2EwYmNiODhlM2I4ZmNjZmOKUwax: 00:25:01.977 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: ]] 00:25:01.977 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: 00:25:01.978 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:01.978 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.978 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:01.978 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:01.978 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:01.978 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.978 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:01.978 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.978 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.978 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:25:01.978 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.978 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:01.978 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:01.978 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:01.978 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.978 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.978 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:01.978 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.978 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:01.978 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:01.978 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:01.978 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:01.978 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.978 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.236 nvme0n1 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: ]] 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:02.236 18:32:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.236 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.494 nvme0n1 00:25:02.495 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.495 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.495 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.495 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.495 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: ]] 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.754 18:32:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.013 nvme0n1 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWNhNDEzZWExMDc5NmNlMjRjOGU0N2RiZjc0OTJhZDRkNTkzNzlmMDFmNDc3YTQwWKD+Zg==: 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWNhNDEzZWExMDc5NmNlMjRjOGU0N2RiZjc0OTJhZDRkNTkzNzlmMDFmNDc3YTQwWKD+Zg==: 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: ]] 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.013 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.272 nvme0n1 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzgyMWM4ZjMzYTgzYzNmOGY1NTcwMjZiMzI3YWRjOGRkMDMxNWFmYzkyMDM4MjQ3MGRhYjE0OWRkNmQ0NDcwOYcCwbM=: 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzgyMWM4ZjMzYTgzYzNmOGY1NTcwMjZiMzI3YWRjOGRkMDMxNWFmYzkyMDM4MjQ3MGRhYjE0OWRkNmQ0NDcwOYcCwbM=: 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:03.272 18:32:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.272 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.530 nvme0n1 00:25:03.530 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.530 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.530 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.530 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.530 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.530 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzcwNzJiZmYwOTMxOWU3N2EwYmNiODhlM2I4ZmNjZmOKUwax: 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzcwNzJiZmYwOTMxOWU3N2EwYmNiODhlM2I4ZmNjZmOKUwax: 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: ]] 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:03.788 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.789 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.789 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:03.789 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.789 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:03.789 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:03.789 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:03.789 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:03.789 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.789 18:32:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.355 nvme0n1 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: ]] 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:04.355 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.356 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.614 nvme0n1 00:25:04.614 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.614 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.614 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.614 18:32:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.614 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.614 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: ]] 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.872 18:32:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.872 18:32:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.437 nvme0n1 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YWNhNDEzZWExMDc5NmNlMjRjOGU0N2RiZjc0OTJhZDRkNTkzNzlmMDFmNDc3YTQwWKD+Zg==: 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWNhNDEzZWExMDc5NmNlMjRjOGU0N2RiZjc0OTJhZDRkNTkzNzlmMDFmNDc3YTQwWKD+Zg==: 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: ]] 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:05.437 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:05.438 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.438 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.438 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:05.438 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.438 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:05.438 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:05.438 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:05.438 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:05.438 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.438 
18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.696 nvme0n1 00:25:05.696 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.696 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.696 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.696 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.696 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.696 18:32:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzgyMWM4ZjMzYTgzYzNmOGY1NTcwMjZiMzI3YWRjOGRkMDMxNWFmYzkyMDM4MjQ3MGRhYjE0OWRkNmQ0NDcwOYcCwbM=: 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzgyMWM4ZjMzYTgzYzNmOGY1NTcwMjZiMzI3YWRjOGRkMDMxNWFmYzkyMDM4MjQ3MGRhYjE0OWRkNmQ0NDcwOYcCwbM=: 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:05.954 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.520 nvme0n1 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.520 18:32:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzcwNzJiZmYwOTMxOWU3N2EwYmNiODhlM2I4ZmNjZmOKUwax: 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzcwNzJiZmYwOTMxOWU3N2EwYmNiODhlM2I4ZmNjZmOKUwax: 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: ]] 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.520 18:32:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.488 nvme0n1 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: ]] 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.488 18:32:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.421 nvme0n1 00:25:08.421 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.421 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.421 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.421 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.421 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.421 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.421 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.421 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.421 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:08.421 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.421 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.421 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.421 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:08.421 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.421 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:08.421 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:08.421 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:08.421 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:25:08.422 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:25:08.422 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:08.422 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:08.422 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:25:08.422 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: ]] 00:25:08.422 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:25:08.422 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:08.422 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.422 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:08.422 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:08.422 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:08.422 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.422 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:08.422 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.422 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.422 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.422 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.422 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:08.422 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:08.422 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:08.422 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.422 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.422 
18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:08.422 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.422 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:08.422 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:08.422 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:08.422 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:08.422 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.422 18:32:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.987 nvme0n1 00:25:08.987 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.987 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.987 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.987 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.987 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.987 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWNhNDEzZWExMDc5NmNlMjRjOGU0N2RiZjc0OTJhZDRkNTkzNzlmMDFmNDc3YTQwWKD+Zg==: 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWNhNDEzZWExMDc5NmNlMjRjOGU0N2RiZjc0OTJhZDRkNTkzNzlmMDFmNDc3YTQwWKD+Zg==: 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: ]] 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.245 18:32:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.180 nvme0n1 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.180 18:32:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzgyMWM4ZjMzYTgzYzNmOGY1NTcwMjZiMzI3YWRjOGRkMDMxNWFmYzkyMDM4MjQ3MGRhYjE0OWRkNmQ0NDcwOYcCwbM=: 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzgyMWM4ZjMzYTgzYzNmOGY1NTcwMjZiMzI3YWRjOGRkMDMxNWFmYzkyMDM4MjQ3MGRhYjE0OWRkNmQ0NDcwOYcCwbM=: 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:10.180 18:32:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.180 18:32:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.112 nvme0n1 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzcwNzJiZmYwOTMxOWU3N2EwYmNiODhlM2I4ZmNjZmOKUwax: 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzcwNzJiZmYwOTMxOWU3N2EwYmNiODhlM2I4ZmNjZmOKUwax: 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: ]] 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:11.112 nvme0n1 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.112 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: ]] 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.113 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.371 nvme0n1 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:11.371 
18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: ]] 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.371 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.629 nvme0n1 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWNhNDEzZWExMDc5NmNlMjRjOGU0N2RiZjc0OTJhZDRkNTkzNzlmMDFmNDc3YTQwWKD+Zg==: 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWNhNDEzZWExMDc5NmNlMjRjOGU0N2RiZjc0OTJhZDRkNTkzNzlmMDFmNDc3YTQwWKD+Zg==: 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: ]] 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.629 
18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.629 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.887 nvme0n1 00:25:11.887 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.887 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.887 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.887 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.887 18:32:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.887 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.887 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.887 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.887 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.887 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:11.887 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.887 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.887 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:11.887 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.887 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:11.887 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:11.888 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:11.888 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzgyMWM4ZjMzYTgzYzNmOGY1NTcwMjZiMzI3YWRjOGRkMDMxNWFmYzkyMDM4MjQ3MGRhYjE0OWRkNmQ0NDcwOYcCwbM=: 00:25:11.888 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:11.888 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:11.888 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:11.888 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzgyMWM4ZjMzYTgzYzNmOGY1NTcwMjZiMzI3YWRjOGRkMDMxNWFmYzkyMDM4MjQ3MGRhYjE0OWRkNmQ0NDcwOYcCwbM=: 00:25:11.888 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:11.888 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:11.888 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.888 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:11.888 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:11.888 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:11.888 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.888 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:11.888 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.888 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.888 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.888 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.888 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:11.888 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:11.888 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:11.888 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.888 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.888 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:11.888 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.888 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:11.888 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:11.888 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:11.888 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:11.888 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.888 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.146 nvme0n1 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzcwNzJiZmYwOTMxOWU3N2EwYmNiODhlM2I4ZmNjZmOKUwax: 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzcwNzJiZmYwOTMxOWU3N2EwYmNiODhlM2I4ZmNjZmOKUwax: 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: ]] 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:12.146 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:12.147 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:12.147 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.147 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.405 nvme0n1 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.405 
18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: ]] 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:12.405 18:32:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.405 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.663 nvme0n1 00:25:12.663 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.663 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.663 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.663 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.663 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.663 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.663 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.663 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.663 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.663 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.663 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.663 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.663 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:25:12.664 18:32:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: ]] 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.664 18:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.945 nvme0n1 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWNhNDEzZWExMDc5NmNlMjRjOGU0N2RiZjc0OTJhZDRkNTkzNzlmMDFmNDc3YTQwWKD+Zg==: 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWNhNDEzZWExMDc5NmNlMjRjOGU0N2RiZjc0OTJhZDRkNTkzNzlmMDFmNDc3YTQwWKD+Zg==: 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: ]] 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.945 18:32:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.945 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.218 nvme0n1 00:25:13.218 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.218 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.218 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.218 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:13.219 
18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzgyMWM4ZjMzYTgzYzNmOGY1NTcwMjZiMzI3YWRjOGRkMDMxNWFmYzkyMDM4MjQ3MGRhYjE0OWRkNmQ0NDcwOYcCwbM=: 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzgyMWM4ZjMzYTgzYzNmOGY1NTcwMjZiMzI3YWRjOGRkMDMxNWFmYzkyMDM4MjQ3MGRhYjE0OWRkNmQ0NDcwOYcCwbM=: 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.219 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
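Each nvmet_auth_set_key <digest> <dhgroup> <keyid> call traced above provisions the kernel nvmet target side of the handshake before the initiator connects. xtrace records only the values being echoed, not their destinations, so the sketch below fills those in with the usual nvmet configfs host attributes; the configfs paths and the keys/ckeys arrays are assumptions, not taken from this log.

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    # Assumed configfs location for the allowed host; not visible in the xtrace.
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac(${digest})" > "$host/dhchap_hash"     # trace: echo 'hmac(sha512)'
    echo "$dhgroup"        > "$host/dhchap_dhgroup"  # trace: echo ffdhe3072 / ffdhe4096 / ffdhe6144
    echo "$key"            > "$host/dhchap_key"      # trace: echo DHHC-1:0x:...
    # keyid 4 carries no controller key, so the bidirectional secret is optional.
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}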
00:25:13.477 nvme0n1 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzcwNzJiZmYwOTMxOWU3N2EwYmNiODhlM2I4ZmNjZmOKUwax: 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzcwNzJiZmYwOTMxOWU3N2EwYmNiODhlM2I4ZmNjZmOKUwax: 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: ]] 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:13.477 18:32:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.477 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.735 nvme0n1 00:25:13.735 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.735 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.735 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.735 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.735 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.735 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.735 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.735 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.735 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.735 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.735 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.735 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.735 18:32:31 
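On the initiator side, the host/auth.sh@55-61 lines repeat the same two RPCs for every digest/dhgroup/keyid combination: restrict bdev_nvme to the parameters under test, then attach with the matching DH-HMAC-CHAP key. The sketch below assembles that connect_authenticate step from the rpc_cmd calls visible in the trace; key0..key4 and ckey0..ckey3 are names of secrets loaded earlier in the test, outside this excerpt.

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Expands to nothing when no controller key exists for this keyid (keyid 4 here).
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
}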
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:13.735 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.735 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:13.735 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:13.736 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:13.736 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:25:13.736 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:25:13.736 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:13.736 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:13.736 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:25:13.736 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: ]] 00:25:13.736 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:25:13.736 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:13.736 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.736 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:13.736 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:13.736 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:13.736 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.736 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:13.736 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.736 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.736 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.736 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.736 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:13.736 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:13.736 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:13.736 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.736 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.736 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:13.736 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.736 18:32:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:13.736 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:13.736 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:13.736 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:13.736 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.736 18:32:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.994 nvme0n1 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: ]] 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:13.994 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.252 nvme0n1 00:25:14.252 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.252 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.252 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.252 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.252 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.252 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWNhNDEzZWExMDc5NmNlMjRjOGU0N2RiZjc0OTJhZDRkNTkzNzlmMDFmNDc3YTQwWKD+Zg==: 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWNhNDEzZWExMDc5NmNlMjRjOGU0N2RiZjc0OTJhZDRkNTkzNzlmMDFmNDc3YTQwWKD+Zg==: 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: ]] 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.510 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.768 nvme0n1 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzgyMWM4ZjMzYTgzYzNmOGY1NTcwMjZiMzI3YWRjOGRkMDMxNWFmYzkyMDM4MjQ3MGRhYjE0OWRkNmQ0NDcwOYcCwbM=: 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzgyMWM4ZjMzYTgzYzNmOGY1NTcwMjZiMzI3YWRjOGRkMDMxNWFmYzkyMDM4MjQ3MGRhYjE0OWRkNmQ0NDcwOYcCwbM=: 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.768 18:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.026 nvme0n1 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzcwNzJiZmYwOTMxOWU3N2EwYmNiODhlM2I4ZmNjZmOKUwax: 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzcwNzJiZmYwOTMxOWU3N2EwYmNiODhlM2I4ZmNjZmOKUwax: 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: ]] 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.026 18:32:33 
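Between iterations the test confirms that exactly one controller named nvme0 came up and then detaches it, which is what the repeated bdev_nvme_get_controllers / jq / bdev_nvme_detach_controller lines above are doing. A compact sketch of that check follows; the helper name is illustrative, not taken from the script.

verify_and_detach() {
    local name
    # List attached controllers and pull their names.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]                      # matches the [[ nvme0 == \n\v\m\e\0 ]] checks above
    rpc_cmd bdev_nvme_detach_controller nvme0
}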
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.026 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.592 nvme0n1 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: ]] 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:15.592 18:32:33 
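The host/auth.sh@101-104 markers show the driver that produces all of these iterations: an outer loop over DH groups and an inner loop over key indices, calling the target-side and initiator-side helpers in turn. A sketch of that loop is below; only sha512 and the ffdhe3072/4096/6144 groups appear in this excerpt, so the contents of the dhgroups and keys arrays are assumptions.

# keys[0..4] / ckeys[0..3] hold the DHHC-1 secrets set up earlier in the test.
for dhgroup in "${dhgroups[@]}"; do          # this excerpt covers ffdhe3072, ffdhe4096, ffdhe6144
    for keyid in "${!keys[@]}"; do           # keyids 0-4; keyid 4 has no controller key
        nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
        connect_authenticate sha512 "$dhgroup" "$keyid"
    done
done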
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.592 18:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.158 nvme0n1 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: ]] 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:16.158 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.159 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:16.159 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:16.159 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:16.159 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:16.159 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.159 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.724 nvme0n1 00:25:16.724 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.724 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.724 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.724 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.724 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.724 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.724 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.724 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.724 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.724 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.724 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.724 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.724 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:25:16.724 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.724 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:16.724 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:16.724 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:16.724 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWNhNDEzZWExMDc5NmNlMjRjOGU0N2RiZjc0OTJhZDRkNTkzNzlmMDFmNDc3YTQwWKD+Zg==: 00:25:16.724 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: 00:25:16.724 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:16.724 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:16.724 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWNhNDEzZWExMDc5NmNlMjRjOGU0N2RiZjc0OTJhZDRkNTkzNzlmMDFmNDc3YTQwWKD+Zg==: 00:25:16.724 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: ]] 00:25:16.724 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: 00:25:16.725 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:16.725 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.725 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:16.725 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:16.725 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:16.725 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.725 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:16.725 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.725 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.725 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.725 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.725 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:16.725 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:16.725 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:16.725 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.725 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.725 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:16.725 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.725 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:16.725 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:16.725 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:16.725 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:16.725 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.725 18:32:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.291 nvme0n1 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzgyMWM4ZjMzYTgzYzNmOGY1NTcwMjZiMzI3YWRjOGRkMDMxNWFmYzkyMDM4MjQ3MGRhYjE0OWRkNmQ0NDcwOYcCwbM=: 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzgyMWM4ZjMzYTgzYzNmOGY1NTcwMjZiMzI3YWRjOGRkMDMxNWFmYzkyMDM4MjQ3MGRhYjE0OWRkNmQ0NDcwOYcCwbM=: 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:17.291 18:32:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.291 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.857 nvme0n1 00:25:17.857 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.857 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.857 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.857 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.857 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.857 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.857 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.857 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.857 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.857 18:32:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzcwNzJiZmYwOTMxOWU3N2EwYmNiODhlM2I4ZmNjZmOKUwax: 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzcwNzJiZmYwOTMxOWU3N2EwYmNiODhlM2I4ZmNjZmOKUwax: 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: ]] 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU3MDEwYzU0NWEwNjcwNTBiZGRjNDBhOWExN2NjYmQ5NjY4OGYwMTJiNWUyMmNlNWRjMTgxZDQ4MTk2OWYzOXqhvNs=: 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.857 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.791 nvme0n1 00:25:18.791 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.791 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.791 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.791 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.791 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.791 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.791 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.791 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.791 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.791 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.791 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.791 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.791 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:18.791 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.791 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:18.791 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:18.791 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:18.791 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:25:18.791 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:25:18.791 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:18.791 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:18.791 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:25:18.791 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: ]] 00:25:18.792 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:25:18.792 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:18.792 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.792 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:18.792 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:18.792 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:18.792 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.792 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:18.792 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.792 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.792 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:18.792 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.792 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:18.792 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:18.792 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:18.792 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.792 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.792 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:18.792 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.792 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:18.792 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:18.792 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:18.792 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:18.792 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:18.792 18:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.726 nvme0n1 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.726 18:32:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: ]] 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.726 18:32:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.726 18:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.657 nvme0n1 00:25:20.657 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.657 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.657 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YWNhNDEzZWExMDc5NmNlMjRjOGU0N2RiZjc0OTJhZDRkNTkzNzlmMDFmNDc3YTQwWKD+Zg==: 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWNhNDEzZWExMDc5NmNlMjRjOGU0N2RiZjc0OTJhZDRkNTkzNzlmMDFmNDc3YTQwWKD+Zg==: 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: ]] 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzdjOGY2MDAxNTUyODYwMjFmNDhhMzVhZGVhNTlkYjmPUv6o: 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:20.658 18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:20.658 
18:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.591 nvme0n1 00:25:21.591 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.591 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.591 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.591 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.591 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.591 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.591 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.591 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.591 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.591 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.591 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.591 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.591 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:21.591 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.591 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:21.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:21.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:21.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzgyMWM4ZjMzYTgzYzNmOGY1NTcwMjZiMzI3YWRjOGRkMDMxNWFmYzkyMDM4MjQ3MGRhYjE0OWRkNmQ0NDcwOYcCwbM=: 00:25:21.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:21.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:21.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:21.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzgyMWM4ZjMzYTgzYzNmOGY1NTcwMjZiMzI3YWRjOGRkMDMxNWFmYzkyMDM4MjQ3MGRhYjE0OWRkNmQ0NDcwOYcCwbM=: 00:25:21.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:21.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:21.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:21.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:21.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:21.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:21.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:21.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:21.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:21.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:21.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:21.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:21.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:21.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:21.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.592 18:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.525 nvme0n1 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: ]] 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.525 request: 00:25:22.525 { 00:25:22.525 "name": "nvme0", 00:25:22.525 "trtype": "tcp", 00:25:22.525 "traddr": "10.0.0.1", 00:25:22.525 "adrfam": "ipv4", 00:25:22.525 "trsvcid": "4420", 00:25:22.525 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:22.525 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:22.525 "prchk_reftag": false, 00:25:22.525 "prchk_guard": false, 00:25:22.525 "hdgst": false, 00:25:22.525 "ddgst": false, 00:25:22.525 "allow_unrecognized_csi": false, 00:25:22.525 "method": "bdev_nvme_attach_controller", 00:25:22.525 "req_id": 1 00:25:22.525 } 00:25:22.525 Got JSON-RPC error response 00:25:22.525 response: 00:25:22.525 { 00:25:22.525 "code": -5, 00:25:22.525 "message": "Input/output error" 00:25:22.525 } 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 
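The ip_candidates / NVMF_INITIATOR_IP xtrace lines that repeat throughout this section come from the get_main_ns_ip helper in nvmf/common.sh: it maps the transport in use to the name of the shell variable holding the address the host should dial, then resolves that name through indirect expansion (here to 10.0.0.1 for tcp). A rough reconstruction, inferred from the traced line numbers (@767-@781) rather than copied from the helper itself; TEST_TRANSPORT is an assumed name for whatever variable expands to "tcp" in the trace:

    # Rough reconstruction of get_main_ns_ip (nvmf/common.sh@767-781), not a verbatim copy.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()

        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # TEST_TRANSPORT is an assumption; the trace shows it expanding to "tcp".
        [[ -z $TEST_TRANSPORT ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

        ip=${ip_candidates[$TEST_TRANSPORT]} # holds the *name* of the address variable
        [[ -z ${!ip} ]] && return 1          # indirect expansion; resolves to 10.0.0.1 here
        echo "${!ip}"
    }

In this run only the tcp candidate matters, so every traced call reduces to echoing the value of NVMF_INITIATOR_IP, 10.0.0.1.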
00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:22.525 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.526 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.526 request: 00:25:22.526 { 00:25:22.526 "name": "nvme0", 00:25:22.526 "trtype": "tcp", 00:25:22.526 "traddr": "10.0.0.1", 00:25:22.526 "adrfam": "ipv4", 00:25:22.526 "trsvcid": "4420", 00:25:22.526 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:22.526 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:22.526 "prchk_reftag": false, 00:25:22.526 "prchk_guard": false, 00:25:22.526 "hdgst": false, 00:25:22.526 "ddgst": false, 00:25:22.526 "dhchap_key": "key2", 00:25:22.526 "allow_unrecognized_csi": false, 00:25:22.526 "method": "bdev_nvme_attach_controller", 00:25:22.526 "req_id": 1 00:25:22.526 } 00:25:22.526 Got JSON-RPC error response 00:25:22.526 response: 00:25:22.526 { 00:25:22.526 "code": -5, 00:25:22.526 "message": "Input/output error" 00:25:22.526 } 00:25:22.526 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:22.526 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:22.526 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:22.526 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:22.526 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:22.526 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.526 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:22.526 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.526 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.526 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.526 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
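The two failed attach attempts traced above exercise the script's negative-authentication checks: with DH-HMAC-CHAP required by the target, bdev_nvme_attach_controller issued with no key (host/auth.sh@112) or with the wrong key (--dhchap-key key2 at host/auth.sh@117, while the target was provisioned for key 1) is expected to come back with JSON-RPC error -5, "Input/output error", after which the script confirms that no controller was left behind. A minimal sketch of that pattern, assuming NOT and rpc_cmd are the autotest_common.sh helpers seen in the trace (their definitions are not shown here):

    # Negative authentication checks (pattern used at host/auth.sh@112 and @117 above):
    # with DH-HMAC-CHAP required by the target, an attach without the right key must
    # fail with JSON-RPC error -5 (Input/output error) and leave no controller behind.

    # No key at all:
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0

    # Wrong key (target currently expects key1):
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2

    # Either way, the failed attach must not leave a controller behind.
    (( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))

The same pattern repeats below for a mismatched controller key (key1 with ckey2) and for bdev_nvme_set_keys, which instead fails with -13, "Permission denied".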
00:25:22.526 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:22.526 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:22.526 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:22.526 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:22.526 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.526 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.526 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:22.526 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.526 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:22.526 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:22.526 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:22.526 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:22.526 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:22.526 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:22.526 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:22.526 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:22.526 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:22.526 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:22.526 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:22.784 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.784 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.784 request: 00:25:22.784 { 00:25:22.784 "name": "nvme0", 00:25:22.784 "trtype": "tcp", 00:25:22.784 "traddr": "10.0.0.1", 00:25:22.784 "adrfam": "ipv4", 00:25:22.784 "trsvcid": "4420", 00:25:22.784 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:22.784 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:22.784 "prchk_reftag": false, 00:25:22.784 "prchk_guard": false, 00:25:22.784 "hdgst": false, 00:25:22.784 "ddgst": false, 00:25:22.784 "dhchap_key": "key1", 00:25:22.784 "dhchap_ctrlr_key": "ckey2", 00:25:22.784 "allow_unrecognized_csi": false, 00:25:22.784 "method": "bdev_nvme_attach_controller", 00:25:22.784 "req_id": 1 00:25:22.784 } 00:25:22.784 Got JSON-RPC error response 00:25:22.784 response: 00:25:22.784 { 00:25:22.784 "code": -5, 00:25:22.784 "message": "Input/output 
error" 00:25:22.784 } 00:25:22.784 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:22.784 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:22.784 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:22.784 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:22.784 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:22.784 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:25:22.784 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:22.784 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:22.784 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:22.784 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.784 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.784 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:22.784 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.784 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:22.784 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:22.784 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:22.784 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:22.784 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.784 18:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.784 nvme0n1 00:25:22.784 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.784 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:22.784 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.784 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.784 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:22.784 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:22.784 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:25:22.784 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:25:22.784 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.784 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:22.784 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:25:22.784 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: ]] 00:25:22.784 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:25:22.784 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:22.784 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.784 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.043 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.043 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.043 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:25:23.043 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.043 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.043 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.043 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.043 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:23.043 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:23.043 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:23.043 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:23.043 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:23.043 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:23.043 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:23.043 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:23.043 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.043 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.043 request: 00:25:23.043 { 00:25:23.043 "name": "nvme0", 00:25:23.043 "dhchap_key": "key1", 00:25:23.043 "dhchap_ctrlr_key": "ckey2", 00:25:23.043 "method": "bdev_nvme_set_keys", 00:25:23.043 "req_id": 1 00:25:23.043 } 00:25:23.043 Got JSON-RPC error response 00:25:23.043 response: 00:25:23.043 { 00:25:23.043 "code": -13, 00:25:23.043 "message": "Permission denied" 00:25:23.043 } 00:25:23.043 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:23.043 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:23.043 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:23.043 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:23.043 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:25:23.043 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.043 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:23.043 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.043 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.043 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.043 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:23.043 18:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:24.415 18:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.415 18:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:24.415 18:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.415 18:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.415 18:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.415 18:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:24.415 18:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgxOTFlNzVjZDA5ZTUxYjgxNmRmNmJhY2Y4ZGQzZTc0NWVhMGRiYTkxMmFjMjE4w9zvOQ==: 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: ]] 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ODJmMTk3OTFjYmRjZTdiODM4MDJiNmQ3OGQyZDNmZjUxYzEwYjExZGQ2ZDdhNWE4GA44lQ==: 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.349 nvme0n1 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzUyOGMzODdiYjhlOGJiMDM5NjE2ZTc1NmEwNWNkNDgpZVkd: 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: ]] 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDMxOTkwZTUzOTg5ODkxNDdmZDliMjFhNzRmMmU4MTJsgKAE: 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.349 request: 00:25:25.349 { 00:25:25.349 "name": "nvme0", 00:25:25.349 "dhchap_key": "key2", 00:25:25.349 "dhchap_ctrlr_key": "ckey1", 00:25:25.349 "method": "bdev_nvme_set_keys", 00:25:25.349 "req_id": 1 00:25:25.349 } 00:25:25.349 Got JSON-RPC error response 00:25:25.349 response: 00:25:25.349 { 00:25:25.349 "code": -13, 00:25:25.349 "message": "Permission denied" 00:25:25.349 } 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:25:25.349 18:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:25:26.723 18:32:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:26.723 rmmod nvme_tcp 00:25:26.723 rmmod nvme_fabrics 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 1341130 ']' 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 1341130 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 1341130 ']' 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 1341130 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1341130 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1341130' 00:25:26.723 killing process with pid 1341130 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 1341130 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 1341130 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:25:26.723 18:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:29.253 18:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:29.253 18:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:29.253 18:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:29.253 18:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:29.253 18:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:29.253 18:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:25:29.253 18:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:29.253 18:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:29.253 18:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:29.253 18:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:29.253 18:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:25:29.253 18:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:25:29.253 18:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:30.188 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:30.188 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:30.188 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:30.188 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:30.188 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:30.188 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:30.188 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:30.188 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:30.188 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:30.188 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:30.188 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:30.188 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:30.188 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:30.188 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:30.188 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:30.188 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:31.120 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:25:31.378 18:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.BdV /tmp/spdk.key-null.LCj /tmp/spdk.key-sha256.xrM /tmp/spdk.key-sha384.Teb /tmp/spdk.key-sha512.9qq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:31.378 18:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:32.754 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:32.754 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:32.754 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 
00:25:32.754 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:32.754 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:32.754 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:32.754 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:32.754 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:32.754 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:32.754 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:32.754 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:32.754 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:32.754 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:32.754 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:32.754 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:32.754 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:32.754 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:32.754 00:25:32.754 real 0m51.643s 00:25:32.754 user 0m49.166s 00:25:32.754 sys 0m6.361s 00:25:32.754 18:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:32.754 18:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.754 ************************************ 00:25:32.754 END TEST nvmf_auth_host 00:25:32.754 ************************************ 00:25:32.754 18:32:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:25:32.754 18:32:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:32.754 18:32:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:32.754 18:32:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:32.754 18:32:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.754 ************************************ 00:25:32.754 START TEST nvmf_digest 00:25:32.754 ************************************ 00:25:32.754 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:33.012 * Looking for test storage... 
00:25:33.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:33.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.013 --rc genhtml_branch_coverage=1 00:25:33.013 --rc genhtml_function_coverage=1 00:25:33.013 --rc genhtml_legend=1 00:25:33.013 --rc geninfo_all_blocks=1 00:25:33.013 --rc geninfo_unexecuted_blocks=1 00:25:33.013 00:25:33.013 ' 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:33.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.013 --rc genhtml_branch_coverage=1 00:25:33.013 --rc genhtml_function_coverage=1 00:25:33.013 --rc genhtml_legend=1 00:25:33.013 --rc geninfo_all_blocks=1 00:25:33.013 --rc geninfo_unexecuted_blocks=1 00:25:33.013 00:25:33.013 ' 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:33.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.013 --rc genhtml_branch_coverage=1 00:25:33.013 --rc genhtml_function_coverage=1 00:25:33.013 --rc genhtml_legend=1 00:25:33.013 --rc geninfo_all_blocks=1 00:25:33.013 --rc geninfo_unexecuted_blocks=1 00:25:33.013 00:25:33.013 ' 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:33.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.013 --rc genhtml_branch_coverage=1 00:25:33.013 --rc genhtml_function_coverage=1 00:25:33.013 --rc genhtml_legend=1 00:25:33.013 --rc geninfo_all_blocks=1 00:25:33.013 --rc geninfo_unexecuted_blocks=1 00:25:33.013 00:25:33.013 ' 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:33.013 
18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:33.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:33.013 18:32:51 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.013 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:33.014 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:33.014 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:25:33.014 18:32:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:35.545 
18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:35.545 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:35.545 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:35.545 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:35.545 Found net devices under 0000:09:00.0: cvl_0_0 
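The NIC discovery step above is essentially a sysfs glob per PCI function; as a rough equivalent (PCI address taken from this run, not a general recipe), the net device behind the first E810 port can be listed with:
  ls /sys/bus/pci/devices/0000:09:00.0/net/    # -> cvl_0_0 in this run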
00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:35.546 Found net devices under 0000:09:00.1: cvl_0_1 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # is_hw=yes 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:35.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:35.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:25:35.546 00:25:35.546 --- 10.0.0.2 ping statistics --- 00:25:35.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:35.546 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:35.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:35.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:25:35.546 00:25:35.546 --- 10.0.0.1 ping statistics --- 00:25:35.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:35.546 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:35.546 ************************************ 00:25:35.546 START TEST nvmf_digest_clean 00:25:35.546 ************************************ 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=1350770 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 1350770 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1350770 ']' 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:35.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:35.546 [2024-10-08 18:32:53.531209] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:25:35.546 [2024-10-08 18:32:53.531315] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:35.546 [2024-10-08 18:32:53.597013] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.546 [2024-10-08 18:32:53.708005] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:35.546 [2024-10-08 18:32:53.708070] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:35.546 [2024-10-08 18:32:53.708098] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:35.546 [2024-10-08 18:32:53.708109] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:35.546 [2024-10-08 18:32:53.708119] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
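A condensed sketch of the nvmfappstart step above, with paths and the namespace name taken from this run (the real helper also records the pid and polls more carefully before returning):
  # Launch the target inside the test namespace with tracepoint group mask 0xFFFF,
  # paused at --wait-for-rpc, then wait until the default RPC socket answers.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # Simple poll; rpc_get_methods is available even before framework_start_init.
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done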
00:25:35.546 [2024-10-08 18:32:53.708739] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.546 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:35.805 null0 00:25:35.805 [2024-10-08 18:32:53.905823] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:35.805 [2024-10-08 18:32:53.930049] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:35.805 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.805 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:35.805 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:35.805 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:35.805 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:35.805 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:35.805 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:35.805 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:35.805 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1350799 00:25:35.805 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:35.805 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1350799 /var/tmp/bperf.sock 00:25:35.805 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1350799 ']' 00:25:35.805 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:35.805 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:25:35.805 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:35.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:35.805 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:35.805 18:32:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:35.805 [2024-10-08 18:32:53.979061] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:25:35.805 [2024-10-08 18:32:53.979136] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1350799 ] 00:25:35.805 [2024-10-08 18:32:54.036731] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.064 [2024-10-08 18:32:54.144984] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.064 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:36.064 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:25:36.064 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:36.064 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:36.064 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:36.630 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:36.630 18:32:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:36.888 nvme0n1 00:25:36.888 18:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:36.888 18:32:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:37.146 Running I/O for 2 seconds... 
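Each clean-digest run drives the bdevperf instance purely over its /var/tmp/bperf.sock RPC socket; condensed, the sequence visible in the trace is:

    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

--ddgst enables the NVMe/TCP data digest on the connection, so data PDUs carry a CRC32C; computing and checking those digests is what feeds the crc32c counters queried after each run.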
00:25:39.013 18266.00 IOPS, 71.35 MiB/s [2024-10-08T16:32:57.300Z] 18458.50 IOPS, 72.10 MiB/s 00:25:39.013 Latency(us) 00:25:39.013 [2024-10-08T16:32:57.300Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.013 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:39.013 nvme0n1 : 2.00 18464.89 72.13 0.00 0.00 6922.71 3665.16 22233.69 00:25:39.013 [2024-10-08T16:32:57.300Z] =================================================================================================================== 00:25:39.013 [2024-10-08T16:32:57.300Z] Total : 18464.89 72.13 0.00 0.00 6922.71 3665.16 22233.69 00:25:39.014 { 00:25:39.014 "results": [ 00:25:39.014 { 00:25:39.014 "job": "nvme0n1", 00:25:39.014 "core_mask": "0x2", 00:25:39.014 "workload": "randread", 00:25:39.014 "status": "finished", 00:25:39.014 "queue_depth": 128, 00:25:39.014 "io_size": 4096, 00:25:39.014 "runtime": 2.004615, 00:25:39.014 "iops": 18464.892261107496, 00:25:39.014 "mibps": 72.12848539495116, 00:25:39.014 "io_failed": 0, 00:25:39.014 "io_timeout": 0, 00:25:39.014 "avg_latency_us": 6922.712590951616, 00:25:39.014 "min_latency_us": 3665.1614814814816, 00:25:39.014 "max_latency_us": 22233.694814814815 00:25:39.014 } 00:25:39.014 ], 00:25:39.014 "core_count": 1 00:25:39.014 } 00:25:39.014 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:39.014 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:39.014 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:39.014 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:39.014 | select(.opcode=="crc32c") 00:25:39.014 | "\(.module_name) \(.executed)"' 00:25:39.014 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:39.579 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:39.579 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:39.579 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:39.579 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:39.580 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1350799 00:25:39.580 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1350799 ']' 00:25:39.580 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1350799 00:25:39.580 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:25:39.580 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:39.580 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1350799 00:25:39.580 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:39.580 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:25:39.580 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1350799' 00:25:39.580 killing process with pid 1350799 00:25:39.580 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1350799 00:25:39.580 Received shutdown signal, test time was about 2.000000 seconds 00:25:39.580 00:25:39.580 Latency(us) 00:25:39.580 [2024-10-08T16:32:57.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.580 [2024-10-08T16:32:57.867Z] =================================================================================================================== 00:25:39.580 [2024-10-08T16:32:57.867Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:39.580 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1350799 00:25:39.838 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:39.838 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:39.838 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:39.838 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:39.838 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:39.838 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:39.838 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:39.838 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1351324 00:25:39.838 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:39.838 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1351324 /var/tmp/bperf.sock 00:25:39.838 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1351324 ']' 00:25:39.838 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:39.838 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:39.838 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:39.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:39.838 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:39.838 18:32:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:39.838 [2024-10-08 18:32:57.931853] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:25:39.838 [2024-10-08 18:32:57.931929] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1351324 ] 00:25:39.838 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:39.838 Zero copy mechanism will not be used. 00:25:39.838 [2024-10-08 18:32:57.991049] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.839 [2024-10-08 18:32:58.104874] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.097 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:40.097 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:25:40.097 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:40.097 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:40.097 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:40.354 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:40.354 18:32:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:40.955 nvme0n1 00:25:40.955 18:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:40.955 18:32:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:40.955 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:40.955 Zero copy mechanism will not be used. 00:25:40.955 Running I/O for 2 seconds... 
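The "greater than zero copy threshold (65536)" message is informational, not a failure: bdevperf is noting that 128 KiB I/Os exceed its zero-copy cutoff, so it falls back to ordinary buffered submission. The MiB/s column in the result table is simply IOPS times I/O size; for this run, 5768.29 x 131072 / 1048576 ≈ 721.04 MiB/s, which matches the reported value.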
00:25:43.273 5694.00 IOPS, 711.75 MiB/s [2024-10-08T16:33:01.560Z] 5767.50 IOPS, 720.94 MiB/s 00:25:43.273 Latency(us) 00:25:43.273 [2024-10-08T16:33:01.560Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:43.273 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:43.274 nvme0n1 : 2.00 5768.29 721.04 0.00 0.00 2769.58 749.42 11068.30 00:25:43.274 [2024-10-08T16:33:01.561Z] =================================================================================================================== 00:25:43.274 [2024-10-08T16:33:01.561Z] Total : 5768.29 721.04 0.00 0.00 2769.58 749.42 11068.30 00:25:43.274 { 00:25:43.274 "results": [ 00:25:43.274 { 00:25:43.274 "job": "nvme0n1", 00:25:43.274 "core_mask": "0x2", 00:25:43.274 "workload": "randread", 00:25:43.274 "status": "finished", 00:25:43.274 "queue_depth": 16, 00:25:43.274 "io_size": 131072, 00:25:43.274 "runtime": 2.0025, 00:25:43.274 "iops": 5768.289637952559, 00:25:43.274 "mibps": 721.0362047440699, 00:25:43.274 "io_failed": 0, 00:25:43.274 "io_timeout": 0, 00:25:43.274 "avg_latency_us": 2769.580091638691, 00:25:43.274 "min_latency_us": 749.4162962962963, 00:25:43.274 "max_latency_us": 11068.302222222223 00:25:43.274 } 00:25:43.274 ], 00:25:43.274 "core_count": 1 00:25:43.274 } 00:25:43.274 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:43.274 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:43.274 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:43.274 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:43.274 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:43.274 | select(.opcode=="crc32c") 00:25:43.274 | "\(.module_name) \(.executed)"' 00:25:43.274 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:43.274 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:43.274 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:43.274 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:43.274 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1351324 00:25:43.274 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1351324 ']' 00:25:43.274 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1351324 00:25:43.274 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:25:43.274 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:43.274 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1351324 00:25:43.274 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:43.274 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:25:43.274 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1351324' 00:25:43.274 killing process with pid 1351324 00:25:43.274 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1351324 00:25:43.274 Received shutdown signal, test time was about 2.000000 seconds 00:25:43.274 00:25:43.274 Latency(us) 00:25:43.274 [2024-10-08T16:33:01.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:43.274 [2024-10-08T16:33:01.561Z] =================================================================================================================== 00:25:43.274 [2024-10-08T16:33:01.561Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:43.274 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1351324 00:25:43.532 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:43.532 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:43.532 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:43.532 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:43.532 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:43.532 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:43.532 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:43.532 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1351833 00:25:43.532 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:43.532 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1351833 /var/tmp/bperf.sock 00:25:43.532 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1351833 ']' 00:25:43.532 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:43.532 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:43.532 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:43.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:43.532 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:43.532 18:33:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:43.790 [2024-10-08 18:33:01.857908] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:25:43.790 [2024-10-08 18:33:01.858000] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1351833 ] 00:25:43.790 [2024-10-08 18:33:01.921641] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.790 [2024-10-08 18:33:02.034661] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:44.047 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:44.047 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:25:44.047 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:44.047 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:44.047 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:44.305 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:44.305 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:44.870 nvme0n1 00:25:44.870 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:44.870 18:33:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:44.870 Running I/O for 2 seconds... 
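After every run the test pulls the accel framework's crc32c counters out of bdevperf and asserts both that the executed count is non-zero and that it is attributed to the expected module, which here is the software implementation because DSA is disabled (scan_dsa=false). The check in the trace boils down to:

    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # expected here: software <non-zero count>

A zero count would mean the digest path never exercised crc32c at all, so the module-name comparison alone would not prove anything.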
00:25:47.177 19602.00 IOPS, 76.57 MiB/s [2024-10-08T16:33:05.464Z] 19493.00 IOPS, 76.14 MiB/s 00:25:47.177 Latency(us) 00:25:47.177 [2024-10-08T16:33:05.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:47.177 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:47.177 nvme0n1 : 2.01 19497.71 76.16 0.00 0.00 6551.29 2706.39 9320.68 00:25:47.177 [2024-10-08T16:33:05.464Z] =================================================================================================================== 00:25:47.177 [2024-10-08T16:33:05.464Z] Total : 19497.71 76.16 0.00 0.00 6551.29 2706.39 9320.68 00:25:47.177 { 00:25:47.177 "results": [ 00:25:47.177 { 00:25:47.177 "job": "nvme0n1", 00:25:47.177 "core_mask": "0x2", 00:25:47.177 "workload": "randwrite", 00:25:47.177 "status": "finished", 00:25:47.177 "queue_depth": 128, 00:25:47.177 "io_size": 4096, 00:25:47.177 "runtime": 2.006082, 00:25:47.177 "iops": 19497.707471578928, 00:25:47.177 "mibps": 76.16291981085519, 00:25:47.177 "io_failed": 0, 00:25:47.177 "io_timeout": 0, 00:25:47.177 "avg_latency_us": 6551.292912228075, 00:25:47.177 "min_latency_us": 2706.394074074074, 00:25:47.177 "max_latency_us": 9320.675555555556 00:25:47.177 } 00:25:47.177 ], 00:25:47.177 "core_count": 1 00:25:47.177 } 00:25:47.177 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:47.177 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:47.177 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:47.177 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:47.177 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:47.177 | select(.opcode=="crc32c") 00:25:47.177 | "\(.module_name) \(.executed)"' 00:25:47.177 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:47.177 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:47.177 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:47.177 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:47.177 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1351833 00:25:47.177 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1351833 ']' 00:25:47.177 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1351833 00:25:47.177 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:25:47.177 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:47.177 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1351833 00:25:47.177 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:47.177 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:25:47.177 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1351833' 00:25:47.177 killing process with pid 1351833 00:25:47.177 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1351833 00:25:47.177 Received shutdown signal, test time was about 2.000000 seconds 00:25:47.177 00:25:47.177 Latency(us) 00:25:47.177 [2024-10-08T16:33:05.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:47.177 [2024-10-08T16:33:05.464Z] =================================================================================================================== 00:25:47.177 [2024-10-08T16:33:05.464Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:47.177 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1351833 00:25:47.436 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:47.436 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:47.436 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:47.436 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:47.436 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:47.436 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:47.436 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:47.436 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1352372 00:25:47.436 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:47.436 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1352372 /var/tmp/bperf.sock 00:25:47.436 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1352372 ']' 00:25:47.436 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:47.436 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:47.436 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:47.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:47.436 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:47.436 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:47.436 [2024-10-08 18:33:05.701461] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:25:47.436 [2024-10-08 18:33:05.701538] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1352372 ] 00:25:47.436 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:47.436 Zero copy mechanism will not be used. 00:25:47.694 [2024-10-08 18:33:05.758941] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.694 [2024-10-08 18:33:05.865512] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:47.694 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:47.694 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:25:47.694 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:47.694 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:47.694 18:33:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:48.259 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:48.259 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:48.518 nvme0n1 00:25:48.518 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:48.518 18:33:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:48.518 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:48.518 Zero copy mechanism will not be used. 00:25:48.518 Running I/O for 2 seconds... 
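This is the last of the four clean-digest workloads; taken together, the run_bperf calls in the trace cover the full matrix:

    run_bperf randread  4096   128 false
    run_bperf randread  131072 16  false
    run_bperf randwrite 4096   128 false
    run_bperf randwrite 131072 16  false

i.e. small-block/high-queue-depth and large-block/low-queue-depth variants of both reads and writes, all with DSA offload left off (the trailing false is scan_dsa).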
00:25:50.832 5167.00 IOPS, 645.88 MiB/s [2024-10-08T16:33:09.119Z] 5116.00 IOPS, 639.50 MiB/s 00:25:50.832 Latency(us) 00:25:50.832 [2024-10-08T16:33:09.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:50.832 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:50.832 nvme0n1 : 2.00 5113.42 639.18 0.00 0.00 3121.32 2475.80 7524.50 00:25:50.832 [2024-10-08T16:33:09.119Z] =================================================================================================================== 00:25:50.832 [2024-10-08T16:33:09.119Z] Total : 5113.42 639.18 0.00 0.00 3121.32 2475.80 7524.50 00:25:50.832 { 00:25:50.832 "results": [ 00:25:50.832 { 00:25:50.832 "job": "nvme0n1", 00:25:50.832 "core_mask": "0x2", 00:25:50.832 "workload": "randwrite", 00:25:50.832 "status": "finished", 00:25:50.832 "queue_depth": 16, 00:25:50.832 "io_size": 131072, 00:25:50.832 "runtime": 2.00492, 00:25:50.832 "iops": 5113.420984378429, 00:25:50.832 "mibps": 639.1776230473037, 00:25:50.832 "io_failed": 0, 00:25:50.832 "io_timeout": 0, 00:25:50.832 "avg_latency_us": 3121.3211647230532, 00:25:50.832 "min_latency_us": 2475.8044444444445, 00:25:50.832 "max_latency_us": 7524.503703703704 00:25:50.832 } 00:25:50.832 ], 00:25:50.832 "core_count": 1 00:25:50.832 } 00:25:50.832 18:33:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:50.832 18:33:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:50.832 18:33:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:50.832 18:33:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:50.832 18:33:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:50.832 | select(.opcode=="crc32c") 00:25:50.832 | "\(.module_name) \(.executed)"' 00:25:50.832 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:50.832 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:50.832 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:50.832 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:50.832 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1352372 00:25:50.832 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1352372 ']' 00:25:50.832 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1352372 00:25:50.832 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:25:50.832 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:50.832 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1352372 00:25:50.832 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:50.832 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:25:50.832 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1352372' 00:25:50.832 killing process with pid 1352372 00:25:50.832 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1352372 00:25:50.832 Received shutdown signal, test time was about 2.000000 seconds 00:25:50.832 00:25:50.832 Latency(us) 00:25:50.832 [2024-10-08T16:33:09.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:50.832 [2024-10-08T16:33:09.119Z] =================================================================================================================== 00:25:50.832 [2024-10-08T16:33:09.119Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:50.832 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1352372 00:25:51.090 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1350770 00:25:51.090 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1350770 ']' 00:25:51.090 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1350770 00:25:51.090 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:25:51.090 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:51.090 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1350770 00:25:51.090 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:51.090 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:51.090 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1350770' 00:25:51.090 killing process with pid 1350770 00:25:51.090 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1350770 00:25:51.090 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1350770 00:25:51.348 00:25:51.348 real 0m16.145s 00:25:51.348 user 0m32.253s 00:25:51.348 sys 0m4.396s 00:25:51.348 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:51.348 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:51.348 ************************************ 00:25:51.348 END TEST nvmf_digest_clean 00:25:51.348 ************************************ 00:25:51.606 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:51.606 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:51.606 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:51.606 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:51.606 ************************************ 00:25:51.606 START TEST nvmf_digest_error 00:25:51.606 ************************************ 00:25:51.606 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # 
run_digest_error 00:25:51.606 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:51.606 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:51.606 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:51.606 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:51.606 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=1353317 00:25:51.606 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:51.606 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 1353317 00:25:51.606 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1353317 ']' 00:25:51.606 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:51.606 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:51.606 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:51.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:51.606 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:51.606 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:51.606 [2024-10-08 18:33:09.733224] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:25:51.606 [2024-10-08 18:33:09.733351] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:51.606 [2024-10-08 18:33:09.798708] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.864 [2024-10-08 18:33:09.907431] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:51.864 [2024-10-08 18:33:09.907476] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:51.864 [2024-10-08 18:33:09.907505] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:51.864 [2024-10-08 18:33:09.907517] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:51.864 [2024-10-08 18:33:09.907527] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:51.864 [2024-10-08 18:33:09.908113] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.864 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:51.864 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:25:51.864 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:51.864 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:51.864 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:51.864 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:51.864 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:51.864 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.864 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:51.864 [2024-10-08 18:33:09.988749] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:51.864 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.864 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:51.864 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:51.864 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.864 18:33:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:51.864 null0 00:25:51.864 [2024-10-08 18:33:10.102690] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:51.864 [2024-10-08 18:33:10.126896] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:51.864 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.864 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:51.864 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:51.864 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:51.864 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:51.864 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:51.864 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1353464 00:25:51.864 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:51.864 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1353464 /var/tmp/bperf.sock 00:25:51.864 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1353464 ']' 
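The error-path test repeats the setup with one decisive difference on the target side: before the framework is started, crc32c is routed to the accel "error" module (rpc_cmd accel_assign_opc -o crc32c -m error, acknowledged by the accel_rpc.c NOTICE above); in plain rpc.py terms, roughly:

    ./scripts/rpc.py accel_assign_opc -o crc32c -m error    # target socket, before framework init

Judging from the accel_error_inject_error calls further down, that module behaves like the normal crc32c path until an error is explicitly injected, which is what lets the host provoke digest failures on demand. Note also that this bdevperf instance is started without --wait-for-rpc, unlike the clean runs.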
00:25:51.864 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:51.864 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:51.864 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:51.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:51.864 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:51.864 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:52.123 [2024-10-08 18:33:10.177913] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:25:52.123 [2024-10-08 18:33:10.177990] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1353464 ] 00:25:52.123 [2024-10-08 18:33:10.238161] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.123 [2024-10-08 18:33:10.350455] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.380 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:52.380 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:25:52.380 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:52.380 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:52.638 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:52.638 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.638 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:52.638 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.638 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:52.638 18:33:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:53.204 nvme0n1 00:25:53.204 18:33:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:53.204 18:33:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.204 18:33:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
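On the initiator side the trace shows NVMe error statistics being enabled and the bdev retry count set to -1 (bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1), injection kept disabled while the controller is attached, and corruption only switched on afterwards for a batch of operations. As rpc.py calls, the sequence amounts to roughly:

    ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable     # target: attach cleanly first
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # target: corrupt 256 ops

The corrupted digests then surface below as "data digest error on tqpair" messages from nvme_tcp.c and as completions reporting COMMAND TRANSIENT TRANSPORT ERROR (00/22).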
00:25:53.204 18:33:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.204 18:33:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:53.204 18:33:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:53.204 Running I/O for 2 seconds... 00:25:53.204 [2024-10-08 18:33:11.384096] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.204 [2024-10-08 18:33:11.384155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.204 [2024-10-08 18:33:11.384174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.204 [2024-10-08 18:33:11.400205] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.204 [2024-10-08 18:33:11.400235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.204 [2024-10-08 18:33:11.400266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.204 [2024-10-08 18:33:11.412705] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.204 [2024-10-08 18:33:11.412748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.204 [2024-10-08 18:33:11.412764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.204 [2024-10-08 18:33:11.425232] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.204 [2024-10-08 18:33:11.425278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.204 [2024-10-08 18:33:11.425304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.204 [2024-10-08 18:33:11.439049] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.204 [2024-10-08 18:33:11.439080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.204 [2024-10-08 18:33:11.439112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.204 [2024-10-08 18:33:11.452144] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.204 [2024-10-08 18:33:11.452189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.204 [2024-10-08 18:33:11.452205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.204 [2024-10-08 18:33:11.463294] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.204 [2024-10-08 18:33:11.463323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.205 [2024-10-08 18:33:11.463356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.205 [2024-10-08 18:33:11.477925] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.205 [2024-10-08 18:33:11.477955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.205 [2024-10-08 18:33:11.477972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.205 [2024-10-08 18:33:11.489336] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.205 [2024-10-08 18:33:11.489373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.205 [2024-10-08 18:33:11.489406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.463 [2024-10-08 18:33:11.503921] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.463 [2024-10-08 18:33:11.503949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.463 [2024-10-08 18:33:11.503980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.463 [2024-10-08 18:33:11.518129] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.463 [2024-10-08 18:33:11.518160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.463 [2024-10-08 18:33:11.518193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.463 [2024-10-08 18:33:11.531217] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.463 [2024-10-08 18:33:11.531248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.463 [2024-10-08 18:33:11.531265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.463 [2024-10-08 18:33:11.542827] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.463 [2024-10-08 18:33:11.542855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.463 [2024-10-08 18:33:11.542886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.463 [2024-10-08 18:33:11.556649] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.463 [2024-10-08 18:33:11.556677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.463 [2024-10-08 18:33:11.556708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.463 [2024-10-08 18:33:11.570040] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.463 [2024-10-08 18:33:11.570069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.463 [2024-10-08 18:33:11.570101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.463 [2024-10-08 18:33:11.581931] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.463 [2024-10-08 18:33:11.581959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.463 [2024-10-08 18:33:11.581991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.463 [2024-10-08 18:33:11.597138] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.463 [2024-10-08 18:33:11.597182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.463 [2024-10-08 18:33:11.597198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.463 [2024-10-08 18:33:11.612025] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.463 [2024-10-08 18:33:11.612070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.463 [2024-10-08 18:33:11.612085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.463 [2024-10-08 18:33:11.623345] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.463 [2024-10-08 18:33:11.623374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.463 [2024-10-08 18:33:11.623391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.463 [2024-10-08 18:33:11.638702] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.463 [2024-10-08 18:33:11.638731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.463 [2024-10-08 18:33:11.638762] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.463 [2024-10-08 18:33:11.652205] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.463 [2024-10-08 18:33:11.652236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.464 [2024-10-08 18:33:11.652252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.464 [2024-10-08 18:33:11.662398] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.464 [2024-10-08 18:33:11.662427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.464 [2024-10-08 18:33:11.662459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.464 [2024-10-08 18:33:11.677907] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.464 [2024-10-08 18:33:11.677935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.464 [2024-10-08 18:33:11.677967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.464 [2024-10-08 18:33:11.691212] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.464 [2024-10-08 18:33:11.691243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.464 [2024-10-08 18:33:11.691260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.464 [2024-10-08 18:33:11.707165] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.464 [2024-10-08 18:33:11.707194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.464 [2024-10-08 18:33:11.707226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.464 [2024-10-08 18:33:11.718042] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.464 [2024-10-08 18:33:11.718070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.464 [2024-10-08 18:33:11.718108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.464 [2024-10-08 18:33:11.733397] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.464 [2024-10-08 18:33:11.733427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24916 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:53.464 [2024-10-08 18:33:11.733459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.464 [2024-10-08 18:33:11.748556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.464 [2024-10-08 18:33:11.748587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.464 [2024-10-08 18:33:11.748605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.722 [2024-10-08 18:33:11.760823] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.722 [2024-10-08 18:33:11.760852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.722 [2024-10-08 18:33:11.760884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.722 [2024-10-08 18:33:11.774323] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.722 [2024-10-08 18:33:11.774355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.722 [2024-10-08 18:33:11.774372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.722 [2024-10-08 18:33:11.787964] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.722 [2024-10-08 18:33:11.787996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.722 [2024-10-08 18:33:11.788028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.722 [2024-10-08 18:33:11.802216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.722 [2024-10-08 18:33:11.802248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.722 [2024-10-08 18:33:11.802265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.722 [2024-10-08 18:33:11.814823] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.722 [2024-10-08 18:33:11.814854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.722 [2024-10-08 18:33:11.814871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.722 [2024-10-08 18:33:11.827423] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.722 [2024-10-08 18:33:11.827454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:80 nsid:1 lba:5276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.722 [2024-10-08 18:33:11.827472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.722 [2024-10-08 18:33:11.838866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.722 [2024-10-08 18:33:11.838893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.722 [2024-10-08 18:33:11.838924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.722 [2024-10-08 18:33:11.852989] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.722 [2024-10-08 18:33:11.853033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.722 [2024-10-08 18:33:11.853050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.722 [2024-10-08 18:33:11.869756] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.722 [2024-10-08 18:33:11.869785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.722 [2024-10-08 18:33:11.869816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.722 [2024-10-08 18:33:11.881347] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.722 [2024-10-08 18:33:11.881378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.722 [2024-10-08 18:33:11.881395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.722 [2024-10-08 18:33:11.896025] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.722 [2024-10-08 18:33:11.896054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:17416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.722 [2024-10-08 18:33:11.896084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.722 [2024-10-08 18:33:11.908921] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.722 [2024-10-08 18:33:11.908949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.722 [2024-10-08 18:33:11.908981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.722 [2024-10-08 18:33:11.923454] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.722 [2024-10-08 18:33:11.923485] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.722 [2024-10-08 18:33:11.923503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.722 [2024-10-08 18:33:11.938756] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.722 [2024-10-08 18:33:11.938785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.722 [2024-10-08 18:33:11.938817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.722 [2024-10-08 18:33:11.949963] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.722 [2024-10-08 18:33:11.950007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.722 [2024-10-08 18:33:11.950029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.722 [2024-10-08 18:33:11.964785] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.722 [2024-10-08 18:33:11.964815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.722 [2024-10-08 18:33:11.964847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.722 [2024-10-08 18:33:11.978021] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.722 [2024-10-08 18:33:11.978052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.722 [2024-10-08 18:33:11.978070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.722 [2024-10-08 18:33:11.992061] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.723 [2024-10-08 18:33:11.992092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.723 [2024-10-08 18:33:11.992109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.723 [2024-10-08 18:33:12.003748] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.723 [2024-10-08 18:33:12.003777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.723 [2024-10-08 18:33:12.003810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.980 [2024-10-08 18:33:12.024084] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 
00:25:53.980 [2024-10-08 18:33:12.024118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.980 [2024-10-08 18:33:12.024136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.980 [2024-10-08 18:33:12.035249] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.980 [2024-10-08 18:33:12.035280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.980 [2024-10-08 18:33:12.035327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.981 [2024-10-08 18:33:12.052374] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.981 [2024-10-08 18:33:12.052406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.981 [2024-10-08 18:33:12.052424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.981 [2024-10-08 18:33:12.068894] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.981 [2024-10-08 18:33:12.068925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.981 [2024-10-08 18:33:12.068958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.981 [2024-10-08 18:33:12.081023] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.981 [2024-10-08 18:33:12.081062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.981 [2024-10-08 18:33:12.081080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.981 [2024-10-08 18:33:12.093937] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.981 [2024-10-08 18:33:12.093979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.981 [2024-10-08 18:33:12.093996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.981 [2024-10-08 18:33:12.109785] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.981 [2024-10-08 18:33:12.109831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.981 [2024-10-08 18:33:12.109856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.981 [2024-10-08 18:33:12.125198] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.981 [2024-10-08 18:33:12.125226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.981 [2024-10-08 18:33:12.125258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.981 [2024-10-08 18:33:12.135527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.981 [2024-10-08 18:33:12.135556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.981 [2024-10-08 18:33:12.135588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.981 [2024-10-08 18:33:12.151852] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.981 [2024-10-08 18:33:12.151881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.981 [2024-10-08 18:33:12.151911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.981 [2024-10-08 18:33:12.166211] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.981 [2024-10-08 18:33:12.166243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.981 [2024-10-08 18:33:12.166261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.981 [2024-10-08 18:33:12.182675] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.981 [2024-10-08 18:33:12.182704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.981 [2024-10-08 18:33:12.182736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.981 [2024-10-08 18:33:12.198608] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.981 [2024-10-08 18:33:12.198639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.981 [2024-10-08 18:33:12.198657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.981 [2024-10-08 18:33:12.210859] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.981 [2024-10-08 18:33:12.210890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.981 [2024-10-08 18:33:12.210907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.981 [2024-10-08 18:33:12.223850] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.981 [2024-10-08 18:33:12.223881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.981 [2024-10-08 18:33:12.223899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.981 [2024-10-08 18:33:12.238311] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.981 [2024-10-08 18:33:12.238354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.981 [2024-10-08 18:33:12.238372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.981 [2024-10-08 18:33:12.249926] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.981 [2024-10-08 18:33:12.249954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.981 [2024-10-08 18:33:12.249986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.981 [2024-10-08 18:33:12.266212] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:53.981 [2024-10-08 18:33:12.266241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.981 [2024-10-08 18:33:12.266273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.239 [2024-10-08 18:33:12.282255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.239 [2024-10-08 18:33:12.282293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.239 [2024-10-08 18:33:12.282314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.239 [2024-10-08 18:33:12.295800] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.239 [2024-10-08 18:33:12.295831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.239 [2024-10-08 18:33:12.295847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.239 [2024-10-08 18:33:12.309830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.239 [2024-10-08 18:33:12.309861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.239 [2024-10-08 18:33:12.309878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:54.239 [2024-10-08 18:33:12.321140] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.239 [2024-10-08 18:33:12.321170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.239 [2024-10-08 18:33:12.321195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.239 [2024-10-08 18:33:12.333635] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.239 [2024-10-08 18:33:12.333682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.239 [2024-10-08 18:33:12.333700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.239 [2024-10-08 18:33:12.347334] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.239 [2024-10-08 18:33:12.347364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.239 [2024-10-08 18:33:12.347381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.239 [2024-10-08 18:33:12.361122] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.239 [2024-10-08 18:33:12.361153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.239 [2024-10-08 18:33:12.361172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.240 18422.00 IOPS, 71.96 MiB/s [2024-10-08T16:33:12.527Z] [2024-10-08 18:33:12.373772] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.240 [2024-10-08 18:33:12.373801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.240 [2024-10-08 18:33:12.373833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.240 [2024-10-08 18:33:12.385026] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.240 [2024-10-08 18:33:12.385054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.240 [2024-10-08 18:33:12.385086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.240 [2024-10-08 18:33:12.399095] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.240 [2024-10-08 18:33:12.399126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.240 [2024-10-08 18:33:12.399143] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.240 [2024-10-08 18:33:12.414160] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.240 [2024-10-08 18:33:12.414192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.240 [2024-10-08 18:33:12.414209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.240 [2024-10-08 18:33:12.430622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.240 [2024-10-08 18:33:12.430653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.240 [2024-10-08 18:33:12.430670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.240 [2024-10-08 18:33:12.441895] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.240 [2024-10-08 18:33:12.441923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.240 [2024-10-08 18:33:12.441954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.240 [2024-10-08 18:33:12.456022] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.240 [2024-10-08 18:33:12.456053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.240 [2024-10-08 18:33:12.456070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.240 [2024-10-08 18:33:12.471654] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.240 [2024-10-08 18:33:12.471685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.240 [2024-10-08 18:33:12.471702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.240 [2024-10-08 18:33:12.481877] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.240 [2024-10-08 18:33:12.481905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.240 [2024-10-08 18:33:12.481937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.240 [2024-10-08 18:33:12.497336] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.240 [2024-10-08 18:33:12.497364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:54.240 [2024-10-08 18:33:12.497396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.240 [2024-10-08 18:33:12.511510] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.240 [2024-10-08 18:33:12.511538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.240 [2024-10-08 18:33:12.511569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.240 [2024-10-08 18:33:12.526326] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.240 [2024-10-08 18:33:12.526368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.240 [2024-10-08 18:33:12.526385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.497 [2024-10-08 18:33:12.540886] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.497 [2024-10-08 18:33:12.540916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.497 [2024-10-08 18:33:12.540932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.497 [2024-10-08 18:33:12.555829] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.497 [2024-10-08 18:33:12.555860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.497 [2024-10-08 18:33:12.555885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.497 [2024-10-08 18:33:12.567394] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.497 [2024-10-08 18:33:12.567423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.497 [2024-10-08 18:33:12.567454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.497 [2024-10-08 18:33:12.582086] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.498 [2024-10-08 18:33:12.582114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.498 [2024-10-08 18:33:12.582145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.498 [2024-10-08 18:33:12.595315] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.498 [2024-10-08 18:33:12.595360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 
lba:6110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.498 [2024-10-08 18:33:12.595377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.498 [2024-10-08 18:33:12.609896] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.498 [2024-10-08 18:33:12.609925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.498 [2024-10-08 18:33:12.609941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.498 [2024-10-08 18:33:12.627249] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.498 [2024-10-08 18:33:12.627279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.498 [2024-10-08 18:33:12.627321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.498 [2024-10-08 18:33:12.639750] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.498 [2024-10-08 18:33:12.639777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.498 [2024-10-08 18:33:12.639808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.498 [2024-10-08 18:33:12.652040] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.498 [2024-10-08 18:33:12.652084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.498 [2024-10-08 18:33:12.652099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.498 [2024-10-08 18:33:12.666880] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.498 [2024-10-08 18:33:12.666908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.498 [2024-10-08 18:33:12.666939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.498 [2024-10-08 18:33:12.680407] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.498 [2024-10-08 18:33:12.680442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.498 [2024-10-08 18:33:12.680475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.498 [2024-10-08 18:33:12.691339] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.498 [2024-10-08 18:33:12.691369] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.498 [2024-10-08 18:33:12.691387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.498 [2024-10-08 18:33:12.705529] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.498 [2024-10-08 18:33:12.705560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.498 [2024-10-08 18:33:12.705577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.498 [2024-10-08 18:33:12.720616] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.498 [2024-10-08 18:33:12.720659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.498 [2024-10-08 18:33:12.720674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.498 [2024-10-08 18:33:12.730763] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.498 [2024-10-08 18:33:12.730790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.498 [2024-10-08 18:33:12.730820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.498 [2024-10-08 18:33:12.744437] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.498 [2024-10-08 18:33:12.744465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.498 [2024-10-08 18:33:12.744497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.498 [2024-10-08 18:33:12.760535] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.498 [2024-10-08 18:33:12.760577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.498 [2024-10-08 18:33:12.760592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.498 [2024-10-08 18:33:12.774893] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.498 [2024-10-08 18:33:12.774924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.498 [2024-10-08 18:33:12.774941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.756 [2024-10-08 18:33:12.786379] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 
00:25:54.756 [2024-10-08 18:33:12.786408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.756 [2024-10-08 18:33:12.786439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.756 [2024-10-08 18:33:12.801567] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.756 [2024-10-08 18:33:12.801611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.756 [2024-10-08 18:33:12.801626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.756 [2024-10-08 18:33:12.817351] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.756 [2024-10-08 18:33:12.817381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:52 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.756 [2024-10-08 18:33:12.817413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.756 [2024-10-08 18:33:12.829146] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.756 [2024-10-08 18:33:12.829173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.756 [2024-10-08 18:33:12.829202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.756 [2024-10-08 18:33:12.843864] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.756 [2024-10-08 18:33:12.843895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.756 [2024-10-08 18:33:12.843912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.756 [2024-10-08 18:33:12.854794] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.756 [2024-10-08 18:33:12.854820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.756 [2024-10-08 18:33:12.854852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.756 [2024-10-08 18:33:12.870411] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.756 [2024-10-08 18:33:12.870441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.756 [2024-10-08 18:33:12.870457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.756 [2024-10-08 18:33:12.885462] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.756 [2024-10-08 18:33:12.885490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.756 [2024-10-08 18:33:12.885521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.756 [2024-10-08 18:33:12.899794] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.756 [2024-10-08 18:33:12.899825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.756 [2024-10-08 18:33:12.899842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.756 [2024-10-08 18:33:12.914655] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.756 [2024-10-08 18:33:12.914682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.756 [2024-10-08 18:33:12.914717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.756 [2024-10-08 18:33:12.928490] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.756 [2024-10-08 18:33:12.928520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.756 [2024-10-08 18:33:12.928538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.756 [2024-10-08 18:33:12.941839] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.756 [2024-10-08 18:33:12.941881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.756 [2024-10-08 18:33:12.941898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.756 [2024-10-08 18:33:12.957494] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.756 [2024-10-08 18:33:12.957541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.756 [2024-10-08 18:33:12.957558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.756 [2024-10-08 18:33:12.972921] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.756 [2024-10-08 18:33:12.972952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.756 [2024-10-08 18:33:12.972969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.756 [2024-10-08 18:33:12.990282] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.756 [2024-10-08 18:33:12.990318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.756 [2024-10-08 18:33:12.990350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.756 [2024-10-08 18:33:13.005924] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.756 [2024-10-08 18:33:13.005955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.756 [2024-10-08 18:33:13.005972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.756 [2024-10-08 18:33:13.016811] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.756 [2024-10-08 18:33:13.016839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.756 [2024-10-08 18:33:13.016869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.756 [2024-10-08 18:33:13.030754] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:54.756 [2024-10-08 18:33:13.030784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.756 [2024-10-08 18:33:13.030800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.014 [2024-10-08 18:33:13.047315] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:55.014 [2024-10-08 18:33:13.047353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.014 [2024-10-08 18:33:13.047385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.014 [2024-10-08 18:33:13.061721] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:55.014 [2024-10-08 18:33:13.061748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.014 [2024-10-08 18:33:13.061779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.014 [2024-10-08 18:33:13.076911] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:55.014 [2024-10-08 18:33:13.076942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.014 [2024-10-08 18:33:13.076959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:55.014 [2024-10-08 18:33:13.087596] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:55.014 [2024-10-08 18:33:13.087627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.014 [2024-10-08 18:33:13.087643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.014 [2024-10-08 18:33:13.100739] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:55.014 [2024-10-08 18:33:13.100766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:17154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.014 [2024-10-08 18:33:13.100797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.014 [2024-10-08 18:33:13.115666] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:55.014 [2024-10-08 18:33:13.115693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.014 [2024-10-08 18:33:13.115724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.014 [2024-10-08 18:33:13.128982] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:55.014 [2024-10-08 18:33:13.129011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.014 [2024-10-08 18:33:13.129027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.014 [2024-10-08 18:33:13.142586] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:55.014 [2024-10-08 18:33:13.142631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.014 [2024-10-08 18:33:13.142648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.014 [2024-10-08 18:33:13.158186] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:55.014 [2024-10-08 18:33:13.158217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.014 [2024-10-08 18:33:13.158239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.014 [2024-10-08 18:33:13.169554] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:55.014 [2024-10-08 18:33:13.169598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.014 [2024-10-08 18:33:13.169614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.014 [2024-10-08 18:33:13.184656] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:55.014 [2024-10-08 18:33:13.184685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.014 [2024-10-08 18:33:13.184701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.014 [2024-10-08 18:33:13.197821] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:55.014 [2024-10-08 18:33:13.197852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.014 [2024-10-08 18:33:13.197869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.014 [2024-10-08 18:33:13.210085] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:55.014 [2024-10-08 18:33:13.210116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.014 [2024-10-08 18:33:13.210134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.014 [2024-10-08 18:33:13.224388] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:55.014 [2024-10-08 18:33:13.224416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.014 [2024-10-08 18:33:13.224447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.014 [2024-10-08 18:33:13.241083] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:55.014 [2024-10-08 18:33:13.241112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.014 [2024-10-08 18:33:13.241143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.014 [2024-10-08 18:33:13.253742] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:55.014 [2024-10-08 18:33:13.253770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.014 [2024-10-08 18:33:13.253801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.014 [2024-10-08 18:33:13.267351] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:55.015 [2024-10-08 18:33:13.267381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.015 [2024-10-08 18:33:13.267413] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.015 [2024-10-08 18:33:13.278822] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:55.015 [2024-10-08 18:33:13.278855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.015 [2024-10-08 18:33:13.278887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.015 [2024-10-08 18:33:13.292852] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:55.015 [2024-10-08 18:33:13.292895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.015 [2024-10-08 18:33:13.292911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.272 [2024-10-08 18:33:13.306285] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:55.272 [2024-10-08 18:33:13.306335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.272 [2024-10-08 18:33:13.306352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.272 [2024-10-08 18:33:13.317896] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:55.272 [2024-10-08 18:33:13.317925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.272 [2024-10-08 18:33:13.317955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.272 [2024-10-08 18:33:13.332415] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:55.272 [2024-10-08 18:33:13.332444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.272 [2024-10-08 18:33:13.332476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.272 [2024-10-08 18:33:13.346057] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:55.272 [2024-10-08 18:33:13.346086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.272 [2024-10-08 18:33:13.346102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.272 [2024-10-08 18:33:13.357672] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:55.272 [2024-10-08 18:33:13.357699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:55.272 [2024-10-08 18:33:13.357730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.272 18439.50 IOPS, 72.03 MiB/s [2024-10-08T16:33:13.559Z] [2024-10-08 18:33:13.371604] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22783f0) 00:25:55.272 [2024-10-08 18:33:13.371632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:55.272 [2024-10-08 18:33:13.371647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:55.272 00:25:55.272 Latency(us) 00:25:55.272 [2024-10-08T16:33:13.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:55.272 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:55.272 nvme0n1 : 2.01 18441.33 72.04 0.00 0.00 6934.22 3543.80 20680.25 00:25:55.272 [2024-10-08T16:33:13.559Z] =================================================================================================================== 00:25:55.272 [2024-10-08T16:33:13.559Z] Total : 18441.33 72.04 0.00 0.00 6934.22 3543.80 20680.25 00:25:55.272 { 00:25:55.272 "results": [ 00:25:55.272 { 00:25:55.272 "job": "nvme0n1", 00:25:55.272 "core_mask": "0x2", 00:25:55.272 "workload": "randread", 00:25:55.272 "status": "finished", 00:25:55.272 "queue_depth": 128, 00:25:55.272 "io_size": 4096, 00:25:55.272 "runtime": 2.006742, 00:25:55.272 "iops": 18441.334262202115, 00:25:55.272 "mibps": 72.03646196172701, 00:25:55.272 "io_failed": 0, 00:25:55.272 "io_timeout": 0, 00:25:55.272 "avg_latency_us": 6934.21769342937, 00:25:55.272 "min_latency_us": 3543.7985185185184, 00:25:55.272 "max_latency_us": 20680.248888888887 00:25:55.272 } 00:25:55.272 ], 00:25:55.272 "core_count": 1 00:25:55.272 } 00:25:55.272 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:55.272 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:55.272 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:55.272 | .driver_specific 00:25:55.272 | .nvme_error 00:25:55.272 | .status_code 00:25:55.272 | .command_transient_transport_error' 00:25:55.272 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:55.530 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 145 > 0 )) 00:25:55.530 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1353464 00:25:55.530 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1353464 ']' 00:25:55.530 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1353464 00:25:55.530 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:25:55.530 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:55.530 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers 
-o comm= 1353464 00:25:55.530 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:55.530 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:55.530 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1353464' 00:25:55.530 killing process with pid 1353464 00:25:55.530 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1353464 00:25:55.530 Received shutdown signal, test time was about 2.000000 seconds 00:25:55.530 00:25:55.530 Latency(us) 00:25:55.530 [2024-10-08T16:33:13.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:55.530 [2024-10-08T16:33:13.817Z] =================================================================================================================== 00:25:55.530 [2024-10-08T16:33:13.817Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:55.530 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1353464 00:25:55.787 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:55.787 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:55.787 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:55.787 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:55.787 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:55.787 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1353869 00:25:55.787 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:55.787 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1353869 /var/tmp/bperf.sock 00:25:55.787 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1353869 ']' 00:25:55.787 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:55.787 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:55.788 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:55.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:55.788 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:55.788 18:33:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:55.788 [2024-10-08 18:33:13.983456] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
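The host/digest.sh trace above is how each run is scored: get_transient_errcount asks the bdevperf process for per-bdev I/O statistics over its RPC socket and filters the NVMe error counters down to the transient transport error count, and the run only passes when that counter is greater than zero (the "(( 145 > 0 ))" check). A minimal sketch of the same query, assuming the /var/tmp/bperf.sock socket and nvme0n1 bdev from this run and paths relative to the SPDK checkout:

  # Fetch I/O statistics for the nvme0n1 bdev from the running bdevperf app
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
        | .driver_specific
        | .nvme_error
        | .status_code
        | .command_transient_transport_error'
  # Data digest errors complete as COMMAND TRANSIENT TRANSPORT ERROR (00/22),
  # so the test asserts this counter is non-zero before killing bdevperf.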
00:25:55.788 [2024-10-08 18:33:13.983532] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1353869 ] 00:25:55.788 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:55.788 Zero copy mechanism will not be used. 00:25:55.788 [2024-10-08 18:33:14.040460] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.045 [2024-10-08 18:33:14.150712] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:56.045 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:56.046 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:25:56.046 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:56.046 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:56.303 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:56.303 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.303 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:56.303 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.303 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:56.303 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:56.869 nvme0n1 00:25:56.869 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:56.869 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.869 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:56.869 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.869 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:56.869 18:33:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:56.869 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:56.869 Zero copy mechanism will not be used. 00:25:56.869 Running I/O for 2 seconds... 
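The setup traced above is what provokes the data digest errors that follow. Over the bdevperf RPC socket the test turns on per-status-code NVMe error counters with --nvme-error-stat and --bdev-retry-count -1, and attaches the NVMe/TCP controller with data digest enabled (--ddgst); via rpc_cmd it first disables crc32c error injection so the attach succeeds, then re-arms it with -t corrupt -i 32 before kicking off the timed workload. A minimal sketch of that sequence using only the RPC calls visible in the trace, with paths relative to the SPDK checkout (the rpc_cmd calls are shown against rpc.py's default socket, which is an assumption):

  # bdevperf side: enable NVMe error statistics and set bdev retry count to -1
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
      --nvme-error-stat --bdev-retry-count -1
  # keep crc32c error injection disabled while the controller attaches
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable          # default socket assumed
  # attach the NVMe/TCP controller with data digest (--ddgst) enabled
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # arm crc32c corruption (-t corrupt -i 32, as in the trace) so digests stop matching
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32    # default socket assumed
  # start the queued randread workload in the already-running bdevperf process
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each mismatch then shows up below as an nvme_tcp.c data digest error paired with a COMMAND TRANSIENT TRANSPORT ERROR completion, which is what the transient error counter check measures.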
00:25:56.869 [2024-10-08 18:33:14.994019] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:56.869 [2024-10-08 18:33:14.994083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.869 [2024-10-08 18:33:14.994103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.869 [2024-10-08 18:33:14.998447] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:56.869 [2024-10-08 18:33:14.998479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.869 [2024-10-08 18:33:14.998496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.869 [2024-10-08 18:33:15.003850] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:56.869 [2024-10-08 18:33:15.003881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.869 [2024-10-08 18:33:15.003914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.869 [2024-10-08 18:33:15.009497] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:56.869 [2024-10-08 18:33:15.009528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.869 [2024-10-08 18:33:15.009546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.869 [2024-10-08 18:33:15.015601] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:56.869 [2024-10-08 18:33:15.015632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.869 [2024-10-08 18:33:15.015650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.869 [2024-10-08 18:33:15.021719] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:56.869 [2024-10-08 18:33:15.021751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.869 [2024-10-08 18:33:15.021768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.869 [2024-10-08 18:33:15.028490] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:56.869 [2024-10-08 18:33:15.028520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.869 [2024-10-08 18:33:15.028552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.869 [2024-10-08 18:33:15.035464] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:56.869 [2024-10-08 18:33:15.035495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.869 [2024-10-08 18:33:15.035513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.869 [2024-10-08 18:33:15.043487] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:56.869 [2024-10-08 18:33:15.043542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.869 [2024-10-08 18:33:15.043561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.869 [2024-10-08 18:33:15.049864] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:56.869 [2024-10-08 18:33:15.049912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.869 [2024-10-08 18:33:15.049928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.869 [2024-10-08 18:33:15.055964] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:56.869 [2024-10-08 18:33:15.055995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.869 [2024-10-08 18:33:15.056027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.869 [2024-10-08 18:33:15.061840] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:56.869 [2024-10-08 18:33:15.061870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.869 [2024-10-08 18:33:15.061902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.869 [2024-10-08 18:33:15.067544] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:56.869 [2024-10-08 18:33:15.067590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.869 [2024-10-08 18:33:15.067607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.869 [2024-10-08 18:33:15.074402] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:56.869 [2024-10-08 18:33:15.074433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.869 [2024-10-08 18:33:15.074450] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.869 [2024-10-08 18:33:15.081545] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:56.869 [2024-10-08 18:33:15.081577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.869 [2024-10-08 18:33:15.081608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.869 [2024-10-08 18:33:15.087385] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:56.869 [2024-10-08 18:33:15.087416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.869 [2024-10-08 18:33:15.087433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.869 [2024-10-08 18:33:15.093172] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:56.869 [2024-10-08 18:33:15.093203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.869 [2024-10-08 18:33:15.093220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.869 [2024-10-08 18:33:15.098070] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:56.869 [2024-10-08 18:33:15.098100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.869 [2024-10-08 18:33:15.098116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.869 [2024-10-08 18:33:15.103068] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:56.869 [2024-10-08 18:33:15.103097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.869 [2024-10-08 18:33:15.103114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.869 [2024-10-08 18:33:15.107918] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:56.869 [2024-10-08 18:33:15.107947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.869 [2024-10-08 18:33:15.107963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.869 [2024-10-08 18:33:15.112824] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:56.869 [2024-10-08 18:33:15.112853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.869 [2024-10-08 18:33:15.112883] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.869 [2024-10-08 18:33:15.117522] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:56.869 [2024-10-08 18:33:15.117552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.869 [2024-10-08 18:33:15.117568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.869 [2024-10-08 18:33:15.122212] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:56.869 [2024-10-08 18:33:15.122240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.869 [2024-10-08 18:33:15.122274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.870 [2024-10-08 18:33:15.127015] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:56.870 [2024-10-08 18:33:15.127044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.870 [2024-10-08 18:33:15.127061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.870 [2024-10-08 18:33:15.131828] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:56.870 [2024-10-08 18:33:15.131856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.870 [2024-10-08 18:33:15.131872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.870 [2024-10-08 18:33:15.136656] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:56.870 [2024-10-08 18:33:15.136686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.870 [2024-10-08 18:33:15.136708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.870 [2024-10-08 18:33:15.141662] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:56.870 [2024-10-08 18:33:15.141691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.870 [2024-10-08 18:33:15.141706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.870 [2024-10-08 18:33:15.146402] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:56.870 [2024-10-08 18:33:15.146432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:56.870 [2024-10-08 18:33:15.146449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.870 [2024-10-08 18:33:15.151161] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:56.870 [2024-10-08 18:33:15.151191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.870 [2024-10-08 18:33:15.151223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.129 [2024-10-08 18:33:15.156195] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.129 [2024-10-08 18:33:15.156230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.129 [2024-10-08 18:33:15.156248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.129 [2024-10-08 18:33:15.161040] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.129 [2024-10-08 18:33:15.161069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.129 [2024-10-08 18:33:15.161086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.129 [2024-10-08 18:33:15.165971] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.129 [2024-10-08 18:33:15.166001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.129 [2024-10-08 18:33:15.166031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.129 [2024-10-08 18:33:15.170853] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.129 [2024-10-08 18:33:15.170883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.129 [2024-10-08 18:33:15.170899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.129 [2024-10-08 18:33:15.175803] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.129 [2024-10-08 18:33:15.175846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.129 [2024-10-08 18:33:15.175863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.129 [2024-10-08 18:33:15.180680] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.129 [2024-10-08 18:33:15.180717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24352 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.129 [2024-10-08 18:33:15.180753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.129 [2024-10-08 18:33:15.185666] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.129 [2024-10-08 18:33:15.185696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.130 [2024-10-08 18:33:15.185728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.130 [2024-10-08 18:33:15.190589] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.130 [2024-10-08 18:33:15.190618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.130 [2024-10-08 18:33:15.190649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.130 [2024-10-08 18:33:15.195429] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.130 [2024-10-08 18:33:15.195460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.130 [2024-10-08 18:33:15.195477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.130 [2024-10-08 18:33:15.200130] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.130 [2024-10-08 18:33:15.200159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.130 [2024-10-08 18:33:15.200176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.130 [2024-10-08 18:33:15.204914] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.130 [2024-10-08 18:33:15.204944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.130 [2024-10-08 18:33:15.204960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.130 [2024-10-08 18:33:15.209778] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.130 [2024-10-08 18:33:15.209807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.130 [2024-10-08 18:33:15.209824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.130 [2024-10-08 18:33:15.214531] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.130 [2024-10-08 18:33:15.214561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:3 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.130 [2024-10-08 18:33:15.214577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.130 [2024-10-08 18:33:15.219488] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.130 [2024-10-08 18:33:15.219518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.130 [2024-10-08 18:33:15.219534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.130 [2024-10-08 18:33:15.224344] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.130 [2024-10-08 18:33:15.224374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.130 [2024-10-08 18:33:15.224390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.130 [2024-10-08 18:33:15.229191] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.130 [2024-10-08 18:33:15.229220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.130 [2024-10-08 18:33:15.229236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.130 [2024-10-08 18:33:15.234176] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.130 [2024-10-08 18:33:15.234205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.130 [2024-10-08 18:33:15.234221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.130 [2024-10-08 18:33:15.239042] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.130 [2024-10-08 18:33:15.239073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.130 [2024-10-08 18:33:15.239088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.130 [2024-10-08 18:33:15.243952] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.130 [2024-10-08 18:33:15.243981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.130 [2024-10-08 18:33:15.243997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.130 [2024-10-08 18:33:15.248737] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.130 [2024-10-08 18:33:15.248766] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.130 [2024-10-08 18:33:15.248798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.130 [2024-10-08 18:33:15.253695] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.130 [2024-10-08 18:33:15.253725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.130 [2024-10-08 18:33:15.253742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.130 [2024-10-08 18:33:15.258583] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.130 [2024-10-08 18:33:15.258629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.130 [2024-10-08 18:33:15.258645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.130 [2024-10-08 18:33:15.263448] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.130 [2024-10-08 18:33:15.263494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.130 [2024-10-08 18:33:15.263516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.130 [2024-10-08 18:33:15.268301] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.130 [2024-10-08 18:33:15.268330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.130 [2024-10-08 18:33:15.268346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.130 [2024-10-08 18:33:15.273112] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.130 [2024-10-08 18:33:15.273142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.130 [2024-10-08 18:33:15.273158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.130 [2024-10-08 18:33:15.278058] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.130 [2024-10-08 18:33:15.278102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.130 [2024-10-08 18:33:15.278118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.130 [2024-10-08 18:33:15.283132] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.130 
[2024-10-08 18:33:15.283175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.130 [2024-10-08 18:33:15.283190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.130 [2024-10-08 18:33:15.287996] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.130 [2024-10-08 18:33:15.288026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.130 [2024-10-08 18:33:15.288043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.130 [2024-10-08 18:33:15.292741] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.130 [2024-10-08 18:33:15.292770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.130 [2024-10-08 18:33:15.292802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.130 [2024-10-08 18:33:15.297559] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.130 [2024-10-08 18:33:15.297588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.130 [2024-10-08 18:33:15.297605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.130 [2024-10-08 18:33:15.302249] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.130 [2024-10-08 18:33:15.302279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.130 [2024-10-08 18:33:15.302305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.130 [2024-10-08 18:33:15.307086] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.130 [2024-10-08 18:33:15.307121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.130 [2024-10-08 18:33:15.307138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.130 [2024-10-08 18:33:15.311908] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.130 [2024-10-08 18:33:15.311937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.130 [2024-10-08 18:33:15.311953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.130 [2024-10-08 18:33:15.316784] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x637260) 00:25:57.130 [2024-10-08 18:33:15.316812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.130 [2024-10-08 18:33:15.316827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.130 [2024-10-08 18:33:15.321621] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.130 [2024-10-08 18:33:15.321665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.130 [2024-10-08 18:33:15.321681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.131 [2024-10-08 18:33:15.326609] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.131 [2024-10-08 18:33:15.326638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.131 [2024-10-08 18:33:15.326654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.131 [2024-10-08 18:33:15.331551] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.131 [2024-10-08 18:33:15.331580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.131 [2024-10-08 18:33:15.331596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.131 [2024-10-08 18:33:15.334961] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.131 [2024-10-08 18:33:15.334990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.131 [2024-10-08 18:33:15.335008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.131 [2024-10-08 18:33:15.337911] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.131 [2024-10-08 18:33:15.337940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.131 [2024-10-08 18:33:15.337956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.131 [2024-10-08 18:33:15.341431] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.131 [2024-10-08 18:33:15.341460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.131 [2024-10-08 18:33:15.341476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.131 [2024-10-08 18:33:15.344446] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.131 [2024-10-08 18:33:15.344474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.131 [2024-10-08 18:33:15.344491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.131 [2024-10-08 18:33:15.348062] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.131 [2024-10-08 18:33:15.348091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.131 [2024-10-08 18:33:15.348108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.131 [2024-10-08 18:33:15.351423] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.131 [2024-10-08 18:33:15.351452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.131 [2024-10-08 18:33:15.351468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.131 [2024-10-08 18:33:15.354173] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.131 [2024-10-08 18:33:15.354203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.131 [2024-10-08 18:33:15.354219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.131 [2024-10-08 18:33:15.358774] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.131 [2024-10-08 18:33:15.358802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.131 [2024-10-08 18:33:15.358833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.131 [2024-10-08 18:33:15.363744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.131 [2024-10-08 18:33:15.363775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.131 [2024-10-08 18:33:15.363807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.131 [2024-10-08 18:33:15.368677] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.131 [2024-10-08 18:33:15.368705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.131 [2024-10-08 18:33:15.368738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:57.131 [2024-10-08 18:33:15.373488] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.131 [2024-10-08 18:33:15.373517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.131 [2024-10-08 18:33:15.373534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.131 [2024-10-08 18:33:15.378329] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.131 [2024-10-08 18:33:15.378357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.131 [2024-10-08 18:33:15.378394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.131 [2024-10-08 18:33:15.383022] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.131 [2024-10-08 18:33:15.383050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.131 [2024-10-08 18:33:15.383082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.131 [2024-10-08 18:33:15.387786] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.131 [2024-10-08 18:33:15.387813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.131 [2024-10-08 18:33:15.387846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.131 [2024-10-08 18:33:15.392596] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.131 [2024-10-08 18:33:15.392624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.131 [2024-10-08 18:33:15.392639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.131 [2024-10-08 18:33:15.397453] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.131 [2024-10-08 18:33:15.397481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.131 [2024-10-08 18:33:15.397512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.131 [2024-10-08 18:33:15.402209] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.131 [2024-10-08 18:33:15.402238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.131 [2024-10-08 18:33:15.402254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.131 [2024-10-08 18:33:15.407121] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.131 [2024-10-08 18:33:15.407149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.131 [2024-10-08 18:33:15.407181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.131 [2024-10-08 18:33:15.411775] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.131 [2024-10-08 18:33:15.411819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.131 [2024-10-08 18:33:15.411835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.391 [2024-10-08 18:33:15.416829] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.391 [2024-10-08 18:33:15.416873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.391 [2024-10-08 18:33:15.416890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.391 [2024-10-08 18:33:15.421626] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.391 [2024-10-08 18:33:15.421665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.391 [2024-10-08 18:33:15.421683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.391 [2024-10-08 18:33:15.426354] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.391 [2024-10-08 18:33:15.426383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.391 [2024-10-08 18:33:15.426399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.391 [2024-10-08 18:33:15.431165] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.391 [2024-10-08 18:33:15.431209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.391 [2024-10-08 18:33:15.431224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.391 [2024-10-08 18:33:15.436105] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.391 [2024-10-08 18:33:15.436150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.391 [2024-10-08 18:33:15.436166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.391 [2024-10-08 18:33:15.440990] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.391 [2024-10-08 18:33:15.441019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.391 [2024-10-08 18:33:15.441052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.391 [2024-10-08 18:33:15.446030] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.391 [2024-10-08 18:33:15.446059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.391 [2024-10-08 18:33:15.446092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.391 [2024-10-08 18:33:15.450975] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.391 [2024-10-08 18:33:15.451005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.391 [2024-10-08 18:33:15.451022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.391 [2024-10-08 18:33:15.455788] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.391 [2024-10-08 18:33:15.455819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.391 [2024-10-08 18:33:15.455836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.391 [2024-10-08 18:33:15.460508] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.391 [2024-10-08 18:33:15.460538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.391 [2024-10-08 18:33:15.460555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.391 [2024-10-08 18:33:15.465377] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.391 [2024-10-08 18:33:15.465407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.391 [2024-10-08 18:33:15.465424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.391 [2024-10-08 18:33:15.470111] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.391 [2024-10-08 18:33:15.470156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.391 [2024-10-08 18:33:15.470173] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.391 [2024-10-08 18:33:15.474920] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.391 [2024-10-08 18:33:15.474965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.391 [2024-10-08 18:33:15.474981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.391 [2024-10-08 18:33:15.479938] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.391 [2024-10-08 18:33:15.479967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.391 [2024-10-08 18:33:15.479999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.391 [2024-10-08 18:33:15.484900] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.391 [2024-10-08 18:33:15.484931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.391 [2024-10-08 18:33:15.484948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.391 [2024-10-08 18:33:15.489615] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.391 [2024-10-08 18:33:15.489645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.391 [2024-10-08 18:33:15.489661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.391 [2024-10-08 18:33:15.494343] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.391 [2024-10-08 18:33:15.494373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.391 [2024-10-08 18:33:15.494389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.391 [2024-10-08 18:33:15.499269] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.391 [2024-10-08 18:33:15.499309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.391 [2024-10-08 18:33:15.499331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.391 [2024-10-08 18:33:15.504409] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.391 [2024-10-08 18:33:15.504439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.391 [2024-10-08 
18:33:15.504463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.391 [2024-10-08 18:33:15.509436] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.391 [2024-10-08 18:33:15.509466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.391 [2024-10-08 18:33:15.509482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.391 [2024-10-08 18:33:15.514861] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.391 [2024-10-08 18:33:15.514894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.391 [2024-10-08 18:33:15.514925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.391 [2024-10-08 18:33:15.520036] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.391 [2024-10-08 18:33:15.520066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.391 [2024-10-08 18:33:15.520099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.391 [2024-10-08 18:33:15.525037] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.392 [2024-10-08 18:33:15.525082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.392 [2024-10-08 18:33:15.525098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.392 [2024-10-08 18:33:15.530515] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.392 [2024-10-08 18:33:15.530545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.392 [2024-10-08 18:33:15.530563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.392 [2024-10-08 18:33:15.536065] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.392 [2024-10-08 18:33:15.536095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.392 [2024-10-08 18:33:15.536127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.392 [2024-10-08 18:33:15.541840] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.392 [2024-10-08 18:33:15.541868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:57.392 [2024-10-08 18:33:15.541900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.392 [2024-10-08 18:33:15.547214] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.392 [2024-10-08 18:33:15.547244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.392 [2024-10-08 18:33:15.547296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.392 [2024-10-08 18:33:15.552619] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.392 [2024-10-08 18:33:15.552654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.392 [2024-10-08 18:33:15.552687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.392 [2024-10-08 18:33:15.558782] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.392 [2024-10-08 18:33:15.558827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.392 [2024-10-08 18:33:15.558844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.392 [2024-10-08 18:33:15.564648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.392 [2024-10-08 18:33:15.564680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.392 [2024-10-08 18:33:15.564698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.392 [2024-10-08 18:33:15.570238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.392 [2024-10-08 18:33:15.570269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.392 [2024-10-08 18:33:15.570303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.392 [2024-10-08 18:33:15.577042] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.392 [2024-10-08 18:33:15.577089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.392 [2024-10-08 18:33:15.577105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.392 [2024-10-08 18:33:15.582883] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.392 [2024-10-08 18:33:15.582915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.392 [2024-10-08 18:33:15.582933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.392 [2024-10-08 18:33:15.588175] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.392 [2024-10-08 18:33:15.588206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.392 [2024-10-08 18:33:15.588223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.392 [2024-10-08 18:33:15.593158] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.392 [2024-10-08 18:33:15.593188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.392 [2024-10-08 18:33:15.593205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.392 [2024-10-08 18:33:15.599391] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.392 [2024-10-08 18:33:15.599421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.392 [2024-10-08 18:33:15.599458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.392 [2024-10-08 18:33:15.606639] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.392 [2024-10-08 18:33:15.606671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.392 [2024-10-08 18:33:15.606688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.392 [2024-10-08 18:33:15.613987] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.392 [2024-10-08 18:33:15.614018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.392 [2024-10-08 18:33:15.614035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.392 [2024-10-08 18:33:15.621752] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.392 [2024-10-08 18:33:15.621783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.392 [2024-10-08 18:33:15.621801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.392 [2024-10-08 18:33:15.629107] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.392 [2024-10-08 18:33:15.629137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.392 [2024-10-08 18:33:15.629155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.392 [2024-10-08 18:33:15.634856] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.392 [2024-10-08 18:33:15.634901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.392 [2024-10-08 18:33:15.634918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.392 [2024-10-08 18:33:15.640041] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.392 [2024-10-08 18:33:15.640072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.392 [2024-10-08 18:33:15.640089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.392 [2024-10-08 18:33:15.644124] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.392 [2024-10-08 18:33:15.644168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.392 [2024-10-08 18:33:15.644184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.392 [2024-10-08 18:33:15.650706] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.392 [2024-10-08 18:33:15.650735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.392 [2024-10-08 18:33:15.650766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.392 [2024-10-08 18:33:15.656596] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.392 [2024-10-08 18:33:15.656632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.392 [2024-10-08 18:33:15.656665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.392 [2024-10-08 18:33:15.661870] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.393 [2024-10-08 18:33:15.661899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.393 [2024-10-08 18:33:15.661930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.393 [2024-10-08 18:33:15.666991] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.393 
[2024-10-08 18:33:15.667020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.393 [2024-10-08 18:33:15.667052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.393 [2024-10-08 18:33:15.672357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.393 [2024-10-08 18:33:15.672388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.393 [2024-10-08 18:33:15.672405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.652 [2024-10-08 18:33:15.678326] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.652 [2024-10-08 18:33:15.678359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-10-08 18:33:15.678376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.652 [2024-10-08 18:33:15.684026] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.652 [2024-10-08 18:33:15.684056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-10-08 18:33:15.684087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.652 [2024-10-08 18:33:15.689574] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.652 [2024-10-08 18:33:15.689603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-10-08 18:33:15.689634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.652 [2024-10-08 18:33:15.695361] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.652 [2024-10-08 18:33:15.695391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-10-08 18:33:15.695423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.652 [2024-10-08 18:33:15.700615] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.652 [2024-10-08 18:33:15.700646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-10-08 18:33:15.700664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.652 [2024-10-08 18:33:15.706343] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x637260) 00:25:57.652 [2024-10-08 18:33:15.706375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-10-08 18:33:15.706393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.652 [2024-10-08 18:33:15.711607] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.652 [2024-10-08 18:33:15.711637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-10-08 18:33:15.711670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.652 [2024-10-08 18:33:15.717069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.652 [2024-10-08 18:33:15.717101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-10-08 18:33:15.717118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.652 [2024-10-08 18:33:15.723038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.652 [2024-10-08 18:33:15.723069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-10-08 18:33:15.723086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.652 [2024-10-08 18:33:15.728893] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.652 [2024-10-08 18:33:15.728924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-10-08 18:33:15.728941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.652 [2024-10-08 18:33:15.734448] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.652 [2024-10-08 18:33:15.734479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-10-08 18:33:15.734496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.652 [2024-10-08 18:33:15.739967] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.652 [2024-10-08 18:33:15.739998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-10-08 18:33:15.740015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.652 [2024-10-08 18:33:15.745219] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.652 [2024-10-08 18:33:15.745249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-10-08 18:33:15.745266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.652 [2024-10-08 18:33:15.750096] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.652 [2024-10-08 18:33:15.750126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-10-08 18:33:15.750152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.652 [2024-10-08 18:33:15.755498] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.652 [2024-10-08 18:33:15.755528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-10-08 18:33:15.755545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.652 [2024-10-08 18:33:15.760577] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.652 [2024-10-08 18:33:15.760607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-10-08 18:33:15.760624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.652 [2024-10-08 18:33:15.765344] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.652 [2024-10-08 18:33:15.765389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-10-08 18:33:15.765405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.652 [2024-10-08 18:33:15.770193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.652 [2024-10-08 18:33:15.770223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-10-08 18:33:15.770239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.652 [2024-10-08 18:33:15.775044] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.652 [2024-10-08 18:33:15.775073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-10-08 18:33:15.775090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:57.652 [2024-10-08 18:33:15.779820] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.652 [2024-10-08 18:33:15.779850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.652 [2024-10-08 18:33:15.779867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.652 [2024-10-08 18:33:15.784661] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.652 [2024-10-08 18:33:15.784690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-10-08 18:33:15.784707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.653 [2024-10-08 18:33:15.789865] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.653 [2024-10-08 18:33:15.789897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-10-08 18:33:15.789914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.653 [2024-10-08 18:33:15.795117] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.653 [2024-10-08 18:33:15.795153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-10-08 18:33:15.795170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.653 [2024-10-08 18:33:15.800029] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.653 [2024-10-08 18:33:15.800059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-10-08 18:33:15.800076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.653 [2024-10-08 18:33:15.804884] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.653 [2024-10-08 18:33:15.804914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-10-08 18:33:15.804931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.653 [2024-10-08 18:33:15.809742] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.653 [2024-10-08 18:33:15.809771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-10-08 18:33:15.809788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.653 [2024-10-08 18:33:15.814533] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.653 [2024-10-08 18:33:15.814562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-10-08 18:33:15.814579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.653 [2024-10-08 18:33:15.819353] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.653 [2024-10-08 18:33:15.819382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-10-08 18:33:15.819398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.653 [2024-10-08 18:33:15.824181] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.653 [2024-10-08 18:33:15.824210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-10-08 18:33:15.824226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.653 [2024-10-08 18:33:15.829031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.653 [2024-10-08 18:33:15.829061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-10-08 18:33:15.829077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.653 [2024-10-08 18:33:15.833841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.653 [2024-10-08 18:33:15.833871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-10-08 18:33:15.833887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.653 [2024-10-08 18:33:15.838553] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.653 [2024-10-08 18:33:15.838583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-10-08 18:33:15.838600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.653 [2024-10-08 18:33:15.843364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.653 [2024-10-08 18:33:15.843393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-10-08 18:33:15.843409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.653 [2024-10-08 18:33:15.848093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.653 [2024-10-08 18:33:15.848122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-10-08 18:33:15.848139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.653 [2024-10-08 18:33:15.853051] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.653 [2024-10-08 18:33:15.853080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-10-08 18:33:15.853096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.653 [2024-10-08 18:33:15.857857] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.653 [2024-10-08 18:33:15.857888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-10-08 18:33:15.857905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.653 [2024-10-08 18:33:15.862558] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.653 [2024-10-08 18:33:15.862588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-10-08 18:33:15.862605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.653 [2024-10-08 18:33:15.867376] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.653 [2024-10-08 18:33:15.867406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-10-08 18:33:15.867423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.653 [2024-10-08 18:33:15.872148] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.653 [2024-10-08 18:33:15.872177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-10-08 18:33:15.872195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.653 [2024-10-08 18:33:15.876817] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.653 [2024-10-08 18:33:15.876846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-10-08 18:33:15.876868] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.653 [2024-10-08 18:33:15.881562] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.653 [2024-10-08 18:33:15.881591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-10-08 18:33:15.881607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.653 [2024-10-08 18:33:15.886327] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.653 [2024-10-08 18:33:15.886356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-10-08 18:33:15.886372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.653 [2024-10-08 18:33:15.891218] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.653 [2024-10-08 18:33:15.891247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-10-08 18:33:15.891264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.653 [2024-10-08 18:33:15.895963] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.653 [2024-10-08 18:33:15.895993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-10-08 18:33:15.896010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.653 [2024-10-08 18:33:15.900745] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.653 [2024-10-08 18:33:15.900774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-10-08 18:33:15.900790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.653 [2024-10-08 18:33:15.905527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.653 [2024-10-08 18:33:15.905556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-10-08 18:33:15.905572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.653 [2024-10-08 18:33:15.910220] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.653 [2024-10-08 18:33:15.910249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 
[2024-10-08 18:33:15.910264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.653 [2024-10-08 18:33:15.914929] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.653 [2024-10-08 18:33:15.914959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-10-08 18:33:15.914975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.653 [2024-10-08 18:33:15.919607] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.653 [2024-10-08 18:33:15.919641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.653 [2024-10-08 18:33:15.919658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.653 [2024-10-08 18:33:15.924584] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.654 [2024-10-08 18:33:15.924612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-10-08 18:33:15.924629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.654 [2024-10-08 18:33:15.929405] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.654 [2024-10-08 18:33:15.929434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-10-08 18:33:15.929451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.654 [2024-10-08 18:33:15.934195] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.654 [2024-10-08 18:33:15.934225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.654 [2024-10-08 18:33:15.934242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.913 [2024-10-08 18:33:15.939168] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.913 [2024-10-08 18:33:15.939199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.913 [2024-10-08 18:33:15.939215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.913 [2024-10-08 18:33:15.943973] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.913 [2024-10-08 18:33:15.944003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4480 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:57.913 [2024-10-08 18:33:15.944019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.913 [2024-10-08 18:33:15.948813] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.913 [2024-10-08 18:33:15.948843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.913 [2024-10-08 18:33:15.948859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.913 [2024-10-08 18:33:15.953618] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.913 [2024-10-08 18:33:15.953647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.913 [2024-10-08 18:33:15.953663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.913 [2024-10-08 18:33:15.958406] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.913 [2024-10-08 18:33:15.958435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.913 [2024-10-08 18:33:15.958452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.913 [2024-10-08 18:33:15.963248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.913 [2024-10-08 18:33:15.963277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.913 [2024-10-08 18:33:15.963303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.913 [2024-10-08 18:33:15.968143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.913 [2024-10-08 18:33:15.968173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.913 [2024-10-08 18:33:15.968189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.913 [2024-10-08 18:33:15.972992] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.913 [2024-10-08 18:33:15.973021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.913 [2024-10-08 18:33:15.973037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.913 [2024-10-08 18:33:15.978117] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.913 [2024-10-08 18:33:15.978148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.913 [2024-10-08 18:33:15.978165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.913 [2024-10-08 18:33:15.982955] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.913 [2024-10-08 18:33:15.982986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.913 [2024-10-08 18:33:15.983003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.913 6057.00 IOPS, 757.12 MiB/s [2024-10-08T16:33:16.200Z] [2024-10-08 18:33:15.989326] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.913 [2024-10-08 18:33:15.989357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.913 [2024-10-08 18:33:15.989374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.913 [2024-10-08 18:33:15.994099] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.913 [2024-10-08 18:33:15.994144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.913 [2024-10-08 18:33:15.994161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.913 [2024-10-08 18:33:15.998914] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.913 [2024-10-08 18:33:15.998943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.913 [2024-10-08 18:33:15.998960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.913 [2024-10-08 18:33:16.003744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.913 [2024-10-08 18:33:16.003773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.913 [2024-10-08 18:33:16.003796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.913 [2024-10-08 18:33:16.008517] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.913 [2024-10-08 18:33:16.008547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.913 [2024-10-08 18:33:16.008563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.913 [2024-10-08 18:33:16.013193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.913 [2024-10-08 
18:33:16.013222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.913 [2024-10-08 18:33:16.013238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.913 [2024-10-08 18:33:16.017836] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.913 [2024-10-08 18:33:16.017867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.913 [2024-10-08 18:33:16.017883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.913 [2024-10-08 18:33:16.022702] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.913 [2024-10-08 18:33:16.022731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.913 [2024-10-08 18:33:16.022747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.913 [2024-10-08 18:33:16.027706] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.913 [2024-10-08 18:33:16.027735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.913 [2024-10-08 18:33:16.027751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.913 [2024-10-08 18:33:16.032488] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.913 [2024-10-08 18:33:16.032518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.913 [2024-10-08 18:33:16.032534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.913 [2024-10-08 18:33:16.037221] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.913 [2024-10-08 18:33:16.037250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.913 [2024-10-08 18:33:16.037266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.913 [2024-10-08 18:33:16.042094] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.913 [2024-10-08 18:33:16.042123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.913 [2024-10-08 18:33:16.042139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.913 [2024-10-08 18:33:16.046864] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x637260) 00:25:57.913 [2024-10-08 18:33:16.046893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.913 [2024-10-08 18:33:16.046909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.913 [2024-10-08 18:33:16.051611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.913 [2024-10-08 18:33:16.051640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.913 [2024-10-08 18:33:16.051656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.913 [2024-10-08 18:33:16.056492] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.913 [2024-10-08 18:33:16.056522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.913 [2024-10-08 18:33:16.056538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.913 [2024-10-08 18:33:16.061467] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.913 [2024-10-08 18:33:16.061503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.913 [2024-10-08 18:33:16.061520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.913 [2024-10-08 18:33:16.066417] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.913 [2024-10-08 18:33:16.066449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.913 [2024-10-08 18:33:16.066467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.913 [2024-10-08 18:33:16.071738] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.913 [2024-10-08 18:33:16.071769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.914 [2024-10-08 18:33:16.071786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.914 [2024-10-08 18:33:16.077107] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.914 [2024-10-08 18:33:16.077138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.914 [2024-10-08 18:33:16.077156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.914 [2024-10-08 18:33:16.082137] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.914 [2024-10-08 18:33:16.082182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.914 [2024-10-08 18:33:16.082199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.914 [2024-10-08 18:33:16.087195] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.914 [2024-10-08 18:33:16.087225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.914 [2024-10-08 18:33:16.087247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.914 [2024-10-08 18:33:16.092148] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.914 [2024-10-08 18:33:16.092185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.914 [2024-10-08 18:33:16.092201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.914 [2024-10-08 18:33:16.097174] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.914 [2024-10-08 18:33:16.097204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.914 [2024-10-08 18:33:16.097221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.914 [2024-10-08 18:33:16.102109] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.914 [2024-10-08 18:33:16.102138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.914 [2024-10-08 18:33:16.102155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.914 [2024-10-08 18:33:16.105520] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.914 [2024-10-08 18:33:16.105550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.914 [2024-10-08 18:33:16.105566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.914 [2024-10-08 18:33:16.108869] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.914 [2024-10-08 18:33:16.108903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.914 [2024-10-08 18:33:16.108921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
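The repeated "data digest error" entries above come from the host-side NVMe/TCP receive path: the data digest (DDGST) carried in a C2HData PDU is a CRC32C over the PDU payload, and nvme_tcp_accel_seq_recv_compute_crc32_done fires the error when the value it computes does not match what arrived on the wire. Purely as a point of reference for what is being verified (this is not SPDK's accel-offloaded implementation, and the file and symbol names below are this sketch's own), a minimal bit-wise CRC32C looks like:

/*
 * crc32c_sketch.c - illustrative only; plain bit-wise CRC32C
 * (Castagnoli polynomial, reflected, init/xorout 0xFFFFFFFF),
 * the checksum family used for the NVMe/TCP data digest.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c(const void *data, size_t len)
{
    const uint8_t *p = data;
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *p++;
        for (int k = 0; k < 8; k++) {
            /* 0x82F63B78 is the reflected Castagnoli polynomial. */
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1));
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* Well-known CRC32C check value: "123456789" -> 0xE3069283. */
    const char *check = "123456789";
    printf("crc32c(\"%s\") = 0x%08X\n", check, crc32c(check, strlen(check)));
    return 0;
}

A receive path would compare this CRC over the received payload against the PDU's digest field and fail the command on mismatch, which is exactly the condition the test above keeps injecting.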
00:25:57.914 [2024-10-08 18:33:16.113275] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.914 [2024-10-08 18:33:16.113314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.914 [2024-10-08 18:33:16.113333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.914 [2024-10-08 18:33:16.117210] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.914 [2024-10-08 18:33:16.117240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.914 [2024-10-08 18:33:16.117256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.914 [2024-10-08 18:33:16.119896] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.914 [2024-10-08 18:33:16.119926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.914 [2024-10-08 18:33:16.119942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.914 [2024-10-08 18:33:16.124243] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.914 [2024-10-08 18:33:16.124279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.914 [2024-10-08 18:33:16.124305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.914 [2024-10-08 18:33:16.128214] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.914 [2024-10-08 18:33:16.128245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.914 [2024-10-08 18:33:16.128261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.914 [2024-10-08 18:33:16.130960] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.914 [2024-10-08 18:33:16.130989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.914 [2024-10-08 18:33:16.131005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.914 [2024-10-08 18:33:16.135893] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.914 [2024-10-08 18:33:16.135921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.914 [2024-10-08 18:33:16.135950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.914 [2024-10-08 18:33:16.141420] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.914 [2024-10-08 18:33:16.141464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.914 [2024-10-08 18:33:16.141479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.914 [2024-10-08 18:33:16.148266] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.914 [2024-10-08 18:33:16.148305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.914 [2024-10-08 18:33:16.148324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.914 [2024-10-08 18:33:16.155459] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.914 [2024-10-08 18:33:16.155489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.914 [2024-10-08 18:33:16.155521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.914 [2024-10-08 18:33:16.161150] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.914 [2024-10-08 18:33:16.161195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.914 [2024-10-08 18:33:16.161212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.914 [2024-10-08 18:33:16.166850] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.914 [2024-10-08 18:33:16.166895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.914 [2024-10-08 18:33:16.166912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.914 [2024-10-08 18:33:16.171815] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.914 [2024-10-08 18:33:16.171844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.914 [2024-10-08 18:33:16.171860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.914 [2024-10-08 18:33:16.176820] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.914 [2024-10-08 18:33:16.176849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.914 [2024-10-08 18:33:16.176880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.914 [2024-10-08 18:33:16.181535] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.914 [2024-10-08 18:33:16.181578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.914 [2024-10-08 18:33:16.181594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.914 [2024-10-08 18:33:16.186435] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.914 [2024-10-08 18:33:16.186478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.914 [2024-10-08 18:33:16.186493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.914 [2024-10-08 18:33:16.191160] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.914 [2024-10-08 18:33:16.191204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.914 [2024-10-08 18:33:16.191220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.914 [2024-10-08 18:33:16.196006] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:57.914 [2024-10-08 18:33:16.196036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.914 [2024-10-08 18:33:16.196052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.173 [2024-10-08 18:33:16.200835] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.173 [2024-10-08 18:33:16.200865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.173 [2024-10-08 18:33:16.200882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.173 [2024-10-08 18:33:16.205685] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.173 [2024-10-08 18:33:16.205715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.173 [2024-10-08 18:33:16.205731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.173 [2024-10-08 18:33:16.210484] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.173 [2024-10-08 18:33:16.210529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.173 [2024-10-08 18:33:16.210551] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.173 [2024-10-08 18:33:16.215437] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.173 [2024-10-08 18:33:16.215465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.173 [2024-10-08 18:33:16.215496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.173 [2024-10-08 18:33:16.220400] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.173 [2024-10-08 18:33:16.220430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.173 [2024-10-08 18:33:16.220446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.173 [2024-10-08 18:33:16.225326] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.173 [2024-10-08 18:33:16.225369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.173 [2024-10-08 18:33:16.225385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.173 [2024-10-08 18:33:16.230427] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.173 [2024-10-08 18:33:16.230472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.173 [2024-10-08 18:33:16.230487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.173 [2024-10-08 18:33:16.235353] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.173 [2024-10-08 18:33:16.235399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.173 [2024-10-08 18:33:16.235414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.173 [2024-10-08 18:33:16.241133] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.173 [2024-10-08 18:33:16.241177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.173 [2024-10-08 18:33:16.241194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.173 [2024-10-08 18:33:16.246759] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.173 [2024-10-08 18:33:16.246805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.173 
[2024-10-08 18:33:16.246821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.173 [2024-10-08 18:33:16.252351] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.173 [2024-10-08 18:33:16.252381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.173 [2024-10-08 18:33:16.252415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.173 [2024-10-08 18:33:16.258114] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.173 [2024-10-08 18:33:16.258166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.173 [2024-10-08 18:33:16.258185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.173 [2024-10-08 18:33:16.263796] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.173 [2024-10-08 18:33:16.263826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.173 [2024-10-08 18:33:16.263842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.173 [2024-10-08 18:33:16.269466] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.173 [2024-10-08 18:33:16.269512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.173 [2024-10-08 18:33:16.269528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.173 [2024-10-08 18:33:16.275172] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.173 [2024-10-08 18:33:16.275200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.173 [2024-10-08 18:33:16.275216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.173 [2024-10-08 18:33:16.280887] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.173 [2024-10-08 18:33:16.280917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.173 [2024-10-08 18:33:16.280934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.173 [2024-10-08 18:33:16.286342] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.173 [2024-10-08 18:33:16.286371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14368 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:58.173 [2024-10-08 18:33:16.286402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.173 [2024-10-08 18:33:16.292044] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.173 [2024-10-08 18:33:16.292074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.173 [2024-10-08 18:33:16.292091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.173 [2024-10-08 18:33:16.297781] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.173 [2024-10-08 18:33:16.297828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.173 [2024-10-08 18:33:16.297844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.173 [2024-10-08 18:33:16.303314] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.173 [2024-10-08 18:33:16.303349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.173 [2024-10-08 18:33:16.303366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.173 [2024-10-08 18:33:16.308795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.173 [2024-10-08 18:33:16.308840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.173 [2024-10-08 18:33:16.308858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.173 [2024-10-08 18:33:16.314518] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.173 [2024-10-08 18:33:16.314550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.174 [2024-10-08 18:33:16.314567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.174 [2024-10-08 18:33:16.322043] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.174 [2024-10-08 18:33:16.322074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.174 [2024-10-08 18:33:16.322091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.174 [2024-10-08 18:33:16.328253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.174 [2024-10-08 18:33:16.328284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.174 [2024-10-08 18:33:16.328310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.174 [2024-10-08 18:33:16.333974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.174 [2024-10-08 18:33:16.334004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.174 [2024-10-08 18:33:16.334021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.174 [2024-10-08 18:33:16.339949] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.174 [2024-10-08 18:33:16.339979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.174 [2024-10-08 18:33:16.339996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.174 [2024-10-08 18:33:16.345649] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.174 [2024-10-08 18:33:16.345681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.174 [2024-10-08 18:33:16.345698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.174 [2024-10-08 18:33:16.351449] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.174 [2024-10-08 18:33:16.351480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.174 [2024-10-08 18:33:16.351498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.174 [2024-10-08 18:33:16.355803] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.174 [2024-10-08 18:33:16.355833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.174 [2024-10-08 18:33:16.355856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.174 [2024-10-08 18:33:16.360857] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.174 [2024-10-08 18:33:16.360888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.174 [2024-10-08 18:33:16.360905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.174 [2024-10-08 18:33:16.367040] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.174 [2024-10-08 18:33:16.367085] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.174 [2024-10-08 18:33:16.367101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.174 [2024-10-08 18:33:16.373181] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.174 [2024-10-08 18:33:16.373212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.174 [2024-10-08 18:33:16.373229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.174 [2024-10-08 18:33:16.379134] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.174 [2024-10-08 18:33:16.379164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.174 [2024-10-08 18:33:16.379195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.174 [2024-10-08 18:33:16.385170] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.174 [2024-10-08 18:33:16.385215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.174 [2024-10-08 18:33:16.385231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.174 [2024-10-08 18:33:16.391090] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.174 [2024-10-08 18:33:16.391121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.174 [2024-10-08 18:33:16.391138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.174 [2024-10-08 18:33:16.396896] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.174 [2024-10-08 18:33:16.396926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.174 [2024-10-08 18:33:16.396959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.174 [2024-10-08 18:33:16.404058] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.174 [2024-10-08 18:33:16.404089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.174 [2024-10-08 18:33:16.404107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.174 [2024-10-08 18:33:16.411721] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.174 
[2024-10-08 18:33:16.411760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.174 [2024-10-08 18:33:16.411777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.174 [2024-10-08 18:33:16.418384] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.174 [2024-10-08 18:33:16.418415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.174 [2024-10-08 18:33:16.418432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.174 [2024-10-08 18:33:16.425634] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.174 [2024-10-08 18:33:16.425665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.174 [2024-10-08 18:33:16.425696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.174 [2024-10-08 18:33:16.433246] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.174 [2024-10-08 18:33:16.433277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.174 [2024-10-08 18:33:16.433303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.174 [2024-10-08 18:33:16.440749] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.174 [2024-10-08 18:33:16.440795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.174 [2024-10-08 18:33:16.440811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.174 [2024-10-08 18:33:16.446506] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.174 [2024-10-08 18:33:16.446537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.174 [2024-10-08 18:33:16.446554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.174 [2024-10-08 18:33:16.451759] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.174 [2024-10-08 18:33:16.451789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.174 [2024-10-08 18:33:16.451805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.174 [2024-10-08 18:33:16.457461] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x637260) 00:25:58.174 [2024-10-08 18:33:16.457492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.174 [2024-10-08 18:33:16.457510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.433 [2024-10-08 18:33:16.463359] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.433 [2024-10-08 18:33:16.463390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.433 [2024-10-08 18:33:16.463407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.433 [2024-10-08 18:33:16.469114] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.433 [2024-10-08 18:33:16.469146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.433 [2024-10-08 18:33:16.469163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.433 [2024-10-08 18:33:16.474441] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.433 [2024-10-08 18:33:16.474472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.433 [2024-10-08 18:33:16.474504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.433 [2024-10-08 18:33:16.480570] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.433 [2024-10-08 18:33:16.480601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.433 [2024-10-08 18:33:16.480617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.433 [2024-10-08 18:33:16.488096] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.433 [2024-10-08 18:33:16.488126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.433 [2024-10-08 18:33:16.488158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.433 [2024-10-08 18:33:16.494011] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.433 [2024-10-08 18:33:16.494041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.433 [2024-10-08 18:33:16.494057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.433 [2024-10-08 18:33:16.499661] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.433 [2024-10-08 18:33:16.499692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.433 [2024-10-08 18:33:16.499709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.434 [2024-10-08 18:33:16.505322] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.434 [2024-10-08 18:33:16.505352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.434 [2024-10-08 18:33:16.505383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.434 [2024-10-08 18:33:16.511033] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.434 [2024-10-08 18:33:16.511065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.434 [2024-10-08 18:33:16.511082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.434 [2024-10-08 18:33:16.517375] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.434 [2024-10-08 18:33:16.517408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.434 [2024-10-08 18:33:16.517430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.434 [2024-10-08 18:33:16.523188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.434 [2024-10-08 18:33:16.523219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.434 [2024-10-08 18:33:16.523237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.434 [2024-10-08 18:33:16.528383] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.434 [2024-10-08 18:33:16.528414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.434 [2024-10-08 18:33:16.528431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.434 [2024-10-08 18:33:16.534220] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.434 [2024-10-08 18:33:16.534252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.434 [2024-10-08 18:33:16.534269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
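Each of these failed reads then completes with status (00/22): status code type 0x0 (generic command status) and status code 0x22, which the completion printer above names COMMAND TRANSIENT TRANSPORT ERROR; dnr:0 shows the Do Not Retry bit is clear, so the host may retry the command. As a hedged sketch only (field layout per the NVMe base specification's status field without the phase-tag bit; the struct and function names are hypothetical, not SPDK's), the status word decodes like this:

/* status_decode.c - unpack an NVMe completion status field:
 * SC bits 7:0, SCT 10:8, CRD 12:11, More 13, DNR 14.
 */
#include <stdint.h>
#include <stdio.h>

struct nvme_status_fields {        /* hypothetical helper type */
    uint8_t sc;    /* status code                */
    uint8_t sct;   /* status code type           */
    uint8_t crd;   /* command retry delay        */
    uint8_t more;  /* more information available */
    uint8_t dnr;   /* do not retry               */
};

static struct nvme_status_fields decode_status(uint16_t status)
{
    struct nvme_status_fields f = {
        .sc   = status & 0xFF,
        .sct  = (status >> 8)  & 0x7,
        .crd  = (status >> 11) & 0x3,
        .more = (status >> 13) & 0x1,
        .dnr  = (status >> 14) & 0x1,
    };
    return f;
}

int main(void)
{
    /* (00/22) with DNR clear, as printed in the log above. */
    uint16_t status = (0x0 << 8) | 0x22;
    struct nvme_status_fields f = decode_status(status);

    printf("sct=%02x sc=%02x dnr=%u -> %s\n", f.sct, f.sc, f.dnr,
           (f.sct == 0x0 && f.sc == 0x22) ?
           "COMMAND TRANSIENT TRANSPORT ERROR" : "other");
    return 0;
}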
00:25:58.434 [2024-10-08 18:33:16.540357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.434 [2024-10-08 18:33:16.540389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.434 [2024-10-08 18:33:16.540407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.434 [2024-10-08 18:33:16.547237] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.434 [2024-10-08 18:33:16.547269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.434 [2024-10-08 18:33:16.547294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.434 [2024-10-08 18:33:16.552689] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.434 [2024-10-08 18:33:16.552720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.434 [2024-10-08 18:33:16.552737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.434 [2024-10-08 18:33:16.557809] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.434 [2024-10-08 18:33:16.557840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.434 [2024-10-08 18:33:16.557857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.434 [2024-10-08 18:33:16.563096] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.434 [2024-10-08 18:33:16.563143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.434 [2024-10-08 18:33:16.563160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.434 [2024-10-08 18:33:16.568452] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.434 [2024-10-08 18:33:16.568485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.434 [2024-10-08 18:33:16.568502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.434 [2024-10-08 18:33:16.573695] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.434 [2024-10-08 18:33:16.573726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.434 [2024-10-08 18:33:16.573743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.434 [2024-10-08 18:33:16.578708] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.434 [2024-10-08 18:33:16.578749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.434 [2024-10-08 18:33:16.578765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.434 [2024-10-08 18:33:16.582540] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.434 [2024-10-08 18:33:16.582571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.434 [2024-10-08 18:33:16.582588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.434 [2024-10-08 18:33:16.586732] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.434 [2024-10-08 18:33:16.586763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.434 [2024-10-08 18:33:16.586796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.434 [2024-10-08 18:33:16.591212] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.434 [2024-10-08 18:33:16.591244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.434 [2024-10-08 18:33:16.591262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.434 [2024-10-08 18:33:16.596963] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.434 [2024-10-08 18:33:16.596995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.434 [2024-10-08 18:33:16.597028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.434 [2024-10-08 18:33:16.602624] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.434 [2024-10-08 18:33:16.602655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.434 [2024-10-08 18:33:16.602671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.434 [2024-10-08 18:33:16.608215] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.434 [2024-10-08 18:33:16.608244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.434 [2024-10-08 18:33:16.608281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.434 [2024-10-08 18:33:16.614219] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.434 [2024-10-08 18:33:16.614264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.434 [2024-10-08 18:33:16.614281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.434 [2024-10-08 18:33:16.620259] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.434 [2024-10-08 18:33:16.620299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.434 [2024-10-08 18:33:16.620318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.434 [2024-10-08 18:33:16.626245] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.434 [2024-10-08 18:33:16.626297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.434 [2024-10-08 18:33:16.626317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.434 [2024-10-08 18:33:16.631985] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.434 [2024-10-08 18:33:16.632032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.434 [2024-10-08 18:33:16.632049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.434 [2024-10-08 18:33:16.637641] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.434 [2024-10-08 18:33:16.637672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.434 [2024-10-08 18:33:16.637689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.434 [2024-10-08 18:33:16.643080] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.434 [2024-10-08 18:33:16.643110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.434 [2024-10-08 18:33:16.643127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.434 [2024-10-08 18:33:16.648524] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.434 [2024-10-08 18:33:16.648564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.434 [2024-10-08 18:33:16.648581] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.434 [2024-10-08 18:33:16.654038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.434 [2024-10-08 18:33:16.654069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.434 [2024-10-08 18:33:16.654086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.434 [2024-10-08 18:33:16.659756] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.434 [2024-10-08 18:33:16.659792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.435 [2024-10-08 18:33:16.659824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.435 [2024-10-08 18:33:16.665075] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.435 [2024-10-08 18:33:16.665106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.435 [2024-10-08 18:33:16.665138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.435 [2024-10-08 18:33:16.670184] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.435 [2024-10-08 18:33:16.670214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.435 [2024-10-08 18:33:16.670231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.435 [2024-10-08 18:33:16.674987] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.435 [2024-10-08 18:33:16.675018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.435 [2024-10-08 18:33:16.675035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.435 [2024-10-08 18:33:16.679833] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.435 [2024-10-08 18:33:16.679862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.435 [2024-10-08 18:33:16.679878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.435 [2024-10-08 18:33:16.684578] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.435 [2024-10-08 18:33:16.684609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.435 
[2024-10-08 18:33:16.684625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.435 [2024-10-08 18:33:16.689416] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.435 [2024-10-08 18:33:16.689452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.435 [2024-10-08 18:33:16.689469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.435 [2024-10-08 18:33:16.694216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.435 [2024-10-08 18:33:16.694245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.435 [2024-10-08 18:33:16.694276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.435 [2024-10-08 18:33:16.699102] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.435 [2024-10-08 18:33:16.699130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.435 [2024-10-08 18:33:16.699161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.435 [2024-10-08 18:33:16.703873] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.435 [2024-10-08 18:33:16.703902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.435 [2024-10-08 18:33:16.703918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.435 [2024-10-08 18:33:16.708730] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.435 [2024-10-08 18:33:16.708760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.435 [2024-10-08 18:33:16.708776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.435 [2024-10-08 18:33:16.713593] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.435 [2024-10-08 18:33:16.713622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.435 [2024-10-08 18:33:16.713638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.435 [2024-10-08 18:33:16.718509] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.435 [2024-10-08 18:33:16.718540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4192 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:58.435 [2024-10-08 18:33:16.718557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.694 [2024-10-08 18:33:16.723359] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.694 [2024-10-08 18:33:16.723401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-10-08 18:33:16.723416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.694 [2024-10-08 18:33:16.728254] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.694 [2024-10-08 18:33:16.728283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-10-08 18:33:16.728325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.694 [2024-10-08 18:33:16.733132] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.694 [2024-10-08 18:33:16.733161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-10-08 18:33:16.733177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.694 [2024-10-08 18:33:16.738194] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.694 [2024-10-08 18:33:16.738223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-10-08 18:33:16.738239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.694 [2024-10-08 18:33:16.743136] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.694 [2024-10-08 18:33:16.743180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-10-08 18:33:16.743203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.694 [2024-10-08 18:33:16.747986] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.694 [2024-10-08 18:33:16.748029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-10-08 18:33:16.748045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.694 [2024-10-08 18:33:16.752909] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.694 [2024-10-08 18:33:16.752952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-10-08 18:33:16.752968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.694 [2024-10-08 18:33:16.757728] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.694 [2024-10-08 18:33:16.757759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-10-08 18:33:16.757776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.694 [2024-10-08 18:33:16.763080] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.694 [2024-10-08 18:33:16.763112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-10-08 18:33:16.763143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.694 [2024-10-08 18:33:16.767961] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.694 [2024-10-08 18:33:16.767993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-10-08 18:33:16.768010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.694 [2024-10-08 18:33:16.772953] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.694 [2024-10-08 18:33:16.772985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-10-08 18:33:16.773001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.694 [2024-10-08 18:33:16.778019] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.694 [2024-10-08 18:33:16.778047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-10-08 18:33:16.778081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.694 [2024-10-08 18:33:16.783149] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.694 [2024-10-08 18:33:16.783180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-10-08 18:33:16.783212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.694 [2024-10-08 18:33:16.788614] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.694 [2024-10-08 18:33:16.788651] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-10-08 18:33:16.788669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.694 [2024-10-08 18:33:16.794232] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.694 [2024-10-08 18:33:16.794263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-10-08 18:33:16.794281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.694 [2024-10-08 18:33:16.799665] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.694 [2024-10-08 18:33:16.799696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-10-08 18:33:16.799712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.694 [2024-10-08 18:33:16.805030] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.694 [2024-10-08 18:33:16.805060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.694 [2024-10-08 18:33:16.805077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.694 [2024-10-08 18:33:16.811064] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.695 [2024-10-08 18:33:16.811094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-10-08 18:33:16.811111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.695 [2024-10-08 18:33:16.816595] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.695 [2024-10-08 18:33:16.816625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-10-08 18:33:16.816658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.695 [2024-10-08 18:33:16.822248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.695 [2024-10-08 18:33:16.822278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-10-08 18:33:16.822303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.695 [2024-10-08 18:33:16.828311] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.695 
[2024-10-08 18:33:16.828340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-10-08 18:33:16.828371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.695 [2024-10-08 18:33:16.834256] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.695 [2024-10-08 18:33:16.834293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-10-08 18:33:16.834313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.695 [2024-10-08 18:33:16.840122] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.695 [2024-10-08 18:33:16.840153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-10-08 18:33:16.840170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.695 [2024-10-08 18:33:16.846057] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.695 [2024-10-08 18:33:16.846102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-10-08 18:33:16.846120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.695 [2024-10-08 18:33:16.852824] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.695 [2024-10-08 18:33:16.852856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-10-08 18:33:16.852873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.695 [2024-10-08 18:33:16.860527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.695 [2024-10-08 18:33:16.860559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-10-08 18:33:16.860576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.695 [2024-10-08 18:33:16.868415] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.695 [2024-10-08 18:33:16.868447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-10-08 18:33:16.868465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.695 [2024-10-08 18:33:16.874756] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x637260) 00:25:58.695 [2024-10-08 18:33:16.874788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-10-08 18:33:16.874805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.695 [2024-10-08 18:33:16.878929] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.695 [2024-10-08 18:33:16.878960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-10-08 18:33:16.878977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.695 [2024-10-08 18:33:16.883551] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.695 [2024-10-08 18:33:16.883582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-10-08 18:33:16.883600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.695 [2024-10-08 18:33:16.889267] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.695 [2024-10-08 18:33:16.889319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-10-08 18:33:16.889343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.695 [2024-10-08 18:33:16.894996] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.695 [2024-10-08 18:33:16.895026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-10-08 18:33:16.895042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.695 [2024-10-08 18:33:16.901862] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.695 [2024-10-08 18:33:16.901909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-10-08 18:33:16.901926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.695 [2024-10-08 18:33:16.908455] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.695 [2024-10-08 18:33:16.908486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-10-08 18:33:16.908517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.695 [2024-10-08 18:33:16.915859] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.695 [2024-10-08 18:33:16.915903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-10-08 18:33:16.915919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.695 [2024-10-08 18:33:16.923438] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.695 [2024-10-08 18:33:16.923468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-10-08 18:33:16.923484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.695 [2024-10-08 18:33:16.930906] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.695 [2024-10-08 18:33:16.930938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-10-08 18:33:16.930970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.695 [2024-10-08 18:33:16.936931] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.695 [2024-10-08 18:33:16.936977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-10-08 18:33:16.936993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.695 [2024-10-08 18:33:16.943043] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.695 [2024-10-08 18:33:16.943091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-10-08 18:33:16.943107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.695 [2024-10-08 18:33:16.950304] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.695 [2024-10-08 18:33:16.950338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-10-08 18:33:16.950371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.695 [2024-10-08 18:33:16.956797] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.695 [2024-10-08 18:33:16.956845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-10-08 18:33:16.956861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
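Each injected digest error in the dump above ends in the same pair of notices: the failed READ command, then its completion with status (00/22), which the log itself spells out as COMMAND TRANSIENT TRANSPORT ERROR (status code type 0x0, generic; status code 0x22). When eyeballing a saved copy of a run like this, the completions can be counted directly; a throwaway one-liner, with the file name purely hypothetical:

    # count transient-transport-error completions in a saved copy of this console output
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' nvmf_digest_error.log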
00:25:58.695 [2024-10-08 18:33:16.963153] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.695 [2024-10-08 18:33:16.963185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-10-08 18:33:16.963217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.695 [2024-10-08 18:33:16.969467] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.695 [2024-10-08 18:33:16.969496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-10-08 18:33:16.969512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.695 [2024-10-08 18:33:16.975555] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.695 [2024-10-08 18:33:16.975587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.695 [2024-10-08 18:33:16.975604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.953 [2024-10-08 18:33:16.981336] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.953 [2024-10-08 18:33:16.981369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.953 [2024-10-08 18:33:16.981385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.953 5856.00 IOPS, 732.00 MiB/s [2024-10-08T16:33:17.240Z] [2024-10-08 18:33:16.988755] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x637260) 00:25:58.953 [2024-10-08 18:33:16.988783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.953 [2024-10-08 18:33:16.988816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.953 00:25:58.953 Latency(us) 00:25:58.953 [2024-10-08T16:33:17.240Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:58.953 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:58.953 nvme0n1 : 2.00 5856.59 732.07 0.00 0.00 2727.27 776.72 8786.68 00:25:58.953 [2024-10-08T16:33:17.240Z] =================================================================================================================== 00:25:58.953 [2024-10-08T16:33:17.240Z] Total : 5856.59 732.07 0.00 0.00 2727.27 776.72 8786.68 00:25:58.953 { 00:25:58.953 "results": [ 00:25:58.953 { 00:25:58.953 "job": "nvme0n1", 00:25:58.953 "core_mask": "0x2", 00:25:58.953 "workload": "randread", 00:25:58.953 "status": "finished", 00:25:58.953 "queue_depth": 16, 00:25:58.953 "io_size": 131072, 00:25:58.953 "runtime": 2.002529, 00:25:58.953 "iops": 5856.594336461544, 00:25:58.953 "mibps": 
732.074292057693, 00:25:58.953 "io_failed": 0, 00:25:58.953 "io_timeout": 0, 00:25:58.953 "avg_latency_us": 2727.2735122025165, 00:25:58.953 "min_latency_us": 776.7229629629629, 00:25:58.953 "max_latency_us": 8786.678518518518 00:25:58.953 } 00:25:58.953 ], 00:25:58.953 "core_count": 1 00:25:58.953 } 00:25:58.953 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:58.953 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:58.953 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:58.953 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:58.953 | .driver_specific 00:25:58.953 | .nvme_error 00:25:58.953 | .status_code 00:25:58.953 | .command_transient_transport_error' 00:25:59.211 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 378 > 0 )) 00:25:59.211 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1353869 00:25:59.211 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1353869 ']' 00:25:59.211 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1353869 00:25:59.211 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:25:59.211 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:59.211 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1353869 00:25:59.211 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:59.211 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:59.211 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1353869' 00:25:59.211 killing process with pid 1353869 00:25:59.211 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1353869 00:25:59.211 Received shutdown signal, test time was about 2.000000 seconds 00:25:59.211 00:25:59.211 Latency(us) 00:25:59.211 [2024-10-08T16:33:17.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.211 [2024-10-08T16:33:17.498Z] =================================================================================================================== 00:25:59.211 [2024-10-08T16:33:17.498Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:59.211 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1353869 00:25:59.469 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:25:59.469 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:59.469 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:59.469 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:59.469 18:33:17 
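The trace just above is digest.sh verifying the randread pass: get_transient_errcount runs bdev_get_iostat over the bdevperf RPC socket and jq extracts the per-status NVMe error counter, which the (( 378 > 0 )) check then requires to be non-zero before the bperf process is killed. A minimal sketch of that query, assuming the same socket path, SPDK checkout and bdev name as in this run, and that the bdev_nvme layer was configured with --nvme-error-stat so the counter exists:

    # Sketch only; the SPDK path, socket path and bdev name are copied from this log.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    errs=$("$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # The digest test passes only if at least one injected CRC error surfaced as a transient transport error.
    (( errs > 0 )) && echo "nvme0n1 reported $errs transient transport errors"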
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:59.469 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1354280 00:25:59.469 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:25:59.469 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1354280 /var/tmp/bperf.sock 00:25:59.469 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1354280 ']' 00:25:59.469 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:59.469 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:59.469 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:59.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:59.469 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:59.469 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:59.469 [2024-10-08 18:33:17.606323] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:25:59.469 [2024-10-08 18:33:17.606400] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1354280 ] 00:25:59.469 [2024-10-08 18:33:17.663522] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.726 [2024-10-08 18:33:17.776078] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:59.726 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:59.726 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:25:59.726 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:59.726 18:33:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:59.983 18:33:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:59.983 18:33:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.983 18:33:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:59.983 18:33:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.983 18:33:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:59.983 18:33:18 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:00.546 nvme0n1 00:26:00.546 18:33:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:00.546 18:33:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.546 18:33:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:00.546 18:33:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.546 18:33:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:00.546 18:33:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:00.546 Running I/O for 2 seconds... 00:26:00.546 [2024-10-08 18:33:18.771157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198ee5c8 00:26:00.546 [2024-10-08 18:33:18.772113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.546 [2024-10-08 18:33:18.772168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:00.546 [2024-10-08 18:33:18.782722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e2c28 00:26:00.546 [2024-10-08 18:33:18.783462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.546 [2024-10-08 18:33:18.783506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:00.546 [2024-10-08 18:33:18.794954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f9b30 00:26:00.546 [2024-10-08 18:33:18.796120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.546 [2024-10-08 18:33:18.796165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:00.546 [2024-10-08 18:33:18.807100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198ec840 00:26:00.546 [2024-10-08 18:33:18.807786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.546 [2024-10-08 18:33:18.807831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:00.546 [2024-10-08 18:33:18.821926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e0ea0 00:26:00.546 [2024-10-08 18:33:18.823796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
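For the randwrite pass the trace follows the same recipe: a second bdevperf is started on /var/tmp/bperf.sock, NVMe error statistics are enabled, crc32c error injection is cleared, a TCP controller is attached with data digest (--ddgst) enabled, injection is re-armed to corrupt every 256th crc32c result, and perform_tests starts the 2-second run whose digest errors fill the rest of this output. A condensed replay of that RPC sequence, with every value copied from the trace; bperf_rpc targets the bdevperf socket, while rpc_cmd targets the default socket of the app under test (assumed here to be /var/tmp/spdk.sock):

    # Sketch of the setup traced above; addresses, NQN and the 1-in-256 interval come from this log.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"
    $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1         # keep per-status error counters, no internal retries
    "$SPDK"/scripts/rpc.py accel_error_inject_error -o crc32c -t disable         # start from a clean injection state
    $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                           # data digest enabled on the TCP qpairs
    "$SPDK"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256  # corrupt every 256th crc32c result
    "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests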
00:26:00.546 [2024-10-08 18:33:18.823840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:00.546 [2024-10-08 18:33:18.830423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198ed0b0 00:26:00.546 [2024-10-08 18:33:18.831309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.546 [2024-10-08 18:33:18.831364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:00.803 [2024-10-08 18:33:18.842644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198ecc78 00:26:00.803 [2024-10-08 18:33:18.843536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.803 [2024-10-08 18:33:18.843579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:00.803 [2024-10-08 18:33:18.854743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f1ca0 00:26:00.803 [2024-10-08 18:33:18.855711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:25138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.803 [2024-10-08 18:33:18.855755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:00.803 [2024-10-08 18:33:18.867062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f3e60 00:26:00.803 [2024-10-08 18:33:18.867776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.803 [2024-10-08 18:33:18.867806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:00.803 [2024-10-08 18:33:18.880811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f57b0 00:26:00.803 [2024-10-08 18:33:18.882231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.803 [2024-10-08 18:33:18.882275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:00.803 [2024-10-08 18:33:18.892039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e23b8 00:26:00.803 [2024-10-08 18:33:18.893361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.803 [2024-10-08 18:33:18.893389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:00.803 [2024-10-08 18:33:18.903836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e27f0 00:26:00.803 [2024-10-08 18:33:18.905269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22623 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:00.803 [2024-10-08 18:33:18.905332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:00.803 [2024-10-08 18:33:18.915106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f7970 00:26:00.803 [2024-10-08 18:33:18.916282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.803 [2024-10-08 18:33:18.916320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:00.803 [2024-10-08 18:33:18.926981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198fac10 00:26:00.803 [2024-10-08 18:33:18.927981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.804 [2024-10-08 18:33:18.928024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:00.804 [2024-10-08 18:33:18.938348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f1868 00:26:00.804 [2024-10-08 18:33:18.939229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.804 [2024-10-08 18:33:18.939271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:00.804 [2024-10-08 18:33:18.950637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f0bc0 00:26:00.804 [2024-10-08 18:33:18.951862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:8681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.804 [2024-10-08 18:33:18.951889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:00.804 [2024-10-08 18:33:18.962742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198fd208 00:26:00.804 [2024-10-08 18:33:18.963836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.804 [2024-10-08 18:33:18.963879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:00.804 [2024-10-08 18:33:18.974934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e99d8 00:26:00.804 [2024-10-08 18:33:18.975808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.804 [2024-10-08 18:33:18.975852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:00.804 [2024-10-08 18:33:18.986144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f5378 00:26:00.804 [2024-10-08 18:33:18.986956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4875 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.804 [2024-10-08 18:33:18.987008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:00.804 [2024-10-08 18:33:19.000817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e9168 00:26:00.804 [2024-10-08 18:33:19.002682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.804 [2024-10-08 18:33:19.002724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:00.804 [2024-10-08 18:33:19.009410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f2948 00:26:00.804 [2024-10-08 18:33:19.010351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.804 [2024-10-08 18:33:19.010392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:00.804 [2024-10-08 18:33:19.021605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f4f40 00:26:00.804 [2024-10-08 18:33:19.022585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.804 [2024-10-08 18:33:19.022627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:00.804 [2024-10-08 18:33:19.035723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e49b0 00:26:00.804 [2024-10-08 18:33:19.037310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.804 [2024-10-08 18:33:19.037353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:00.804 [2024-10-08 18:33:19.047738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e7c50 00:26:00.804 [2024-10-08 18:33:19.049312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.804 [2024-10-08 18:33:19.049354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:00.804 [2024-10-08 18:33:19.058074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198eaef0 00:26:00.804 [2024-10-08 18:33:19.059834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.804 [2024-10-08 18:33:19.059861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.804 [2024-10-08 18:33:19.070102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f6020 00:26:00.804 [2024-10-08 18:33:19.071626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 
nsid:1 lba:3583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.804 [2024-10-08 18:33:19.071655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:00.804 [2024-10-08 18:33:19.081999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e0630 00:26:00.804 [2024-10-08 18:33:19.083369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.804 [2024-10-08 18:33:19.083420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:01.062 [2024-10-08 18:33:19.095075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198ee5c8 00:26:01.062 [2024-10-08 18:33:19.096748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.062 [2024-10-08 18:33:19.096791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:01.062 [2024-10-08 18:33:19.107465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e88f8 00:26:01.062 [2024-10-08 18:33:19.109093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.062 [2024-10-08 18:33:19.109136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:01.062 [2024-10-08 18:33:19.118181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f2510 00:26:01.062 [2024-10-08 18:33:19.119567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.062 [2024-10-08 18:33:19.119595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:01.062 [2024-10-08 18:33:19.130128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198fe2e8 00:26:01.062 [2024-10-08 18:33:19.131393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.062 [2024-10-08 18:33:19.131420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:01.062 [2024-10-08 18:33:19.142172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e0630 00:26:01.062 [2024-10-08 18:33:19.143429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.062 [2024-10-08 18:33:19.143471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:01.062 [2024-10-08 18:33:19.156311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f96f8 00:26:01.062 [2024-10-08 18:33:19.158239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:17 nsid:1 lba:23162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.062 [2024-10-08 18:33:19.158283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.062 [2024-10-08 18:33:19.164851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e1b48 00:26:01.062 [2024-10-08 18:33:19.165928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.062 [2024-10-08 18:33:19.165971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:01.062 [2024-10-08 18:33:19.176998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e3d08 00:26:01.062 [2024-10-08 18:33:19.177708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.062 [2024-10-08 18:33:19.177738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:01.062 [2024-10-08 18:33:19.191204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f57b0 00:26:01.062 [2024-10-08 18:33:19.192843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.062 [2024-10-08 18:33:19.192887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:01.062 [2024-10-08 18:33:19.203334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198ef6a8 00:26:01.062 [2024-10-08 18:33:19.204981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.062 [2024-10-08 18:33:19.205024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:01.062 [2024-10-08 18:33:19.211842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198fc128 00:26:01.062 [2024-10-08 18:33:19.212736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.062 [2024-10-08 18:33:19.212777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:01.062 [2024-10-08 18:33:19.226315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e8088 00:26:01.062 [2024-10-08 18:33:19.227845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.062 [2024-10-08 18:33:19.227888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:01.062 [2024-10-08 18:33:19.238354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198ed4e8 00:26:01.062 [2024-10-08 18:33:19.239901] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.062 [2024-10-08 18:33:19.239945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:01.062 [2024-10-08 18:33:19.247043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e9e10 00:26:01.062 [2024-10-08 18:33:19.247889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.062 [2024-10-08 18:33:19.247932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:01.062 [2024-10-08 18:33:19.261310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e0a68 00:26:01.062 [2024-10-08 18:33:19.262771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.062 [2024-10-08 18:33:19.262798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:01.062 [2024-10-08 18:33:19.273321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f0ff8 00:26:01.062 [2024-10-08 18:33:19.274806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.062 [2024-10-08 18:33:19.274850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:01.062 [2024-10-08 18:33:19.284737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198fc560 00:26:01.062 [2024-10-08 18:33:19.286049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.062 [2024-10-08 18:33:19.286078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:01.062 [2024-10-08 18:33:19.296514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e2c28 00:26:01.062 [2024-10-08 18:33:19.297674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.062 [2024-10-08 18:33:19.297721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:01.062 [2024-10-08 18:33:19.307803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e6738 00:26:01.062 [2024-10-08 18:33:19.308982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.062 [2024-10-08 18:33:19.309025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:01.062 [2024-10-08 18:33:19.319816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198ed4e8 00:26:01.062 [2024-10-08 18:33:19.320514] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.062 [2024-10-08 18:33:19.320557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:01.062 [2024-10-08 18:33:19.333430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e8d30 00:26:01.062 [2024-10-08 18:33:19.334914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.062 [2024-10-08 18:33:19.334957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.062 [2024-10-08 18:33:19.344616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f8e88 00:26:01.062 [2024-10-08 18:33:19.345996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.062 [2024-10-08 18:33:19.346041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:01.320 [2024-10-08 18:33:19.358162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198fe720 00:26:01.320 [2024-10-08 18:33:19.359986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.320 [2024-10-08 18:33:19.360030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:01.320 [2024-10-08 18:33:19.366871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f7970 00:26:01.320 [2024-10-08 18:33:19.367906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.320 [2024-10-08 18:33:19.367948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:01.320 [2024-10-08 18:33:19.381271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198eaab8 00:26:01.320 [2024-10-08 18:33:19.382739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.320 [2024-10-08 18:33:19.382781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:01.320 [2024-10-08 18:33:19.392475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198eaab8 00:26:01.320 [2024-10-08 18:33:19.393867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.320 [2024-10-08 18:33:19.393910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:01.320 [2024-10-08 18:33:19.404198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e0ea0 00:26:01.320 [2024-10-08 
18:33:19.405649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.320 [2024-10-08 18:33:19.405692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:01.320 [2024-10-08 18:33:19.415135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198ed920 00:26:01.320 [2024-10-08 18:33:19.416395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.320 [2024-10-08 18:33:19.416423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:01.320 [2024-10-08 18:33:19.426742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f35f0 00:26:01.320 [2024-10-08 18:33:19.427992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.320 [2024-10-08 18:33:19.428035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:01.320 [2024-10-08 18:33:19.437870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e01f8 00:26:01.320 [2024-10-08 18:33:19.438622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.320 [2024-10-08 18:33:19.438666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:01.320 [2024-10-08 18:33:19.452340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e5220 00:26:01.320 [2024-10-08 18:33:19.454019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.320 [2024-10-08 18:33:19.454062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:01.320 [2024-10-08 18:33:19.460830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e3060 00:26:01.320 [2024-10-08 18:33:19.461598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.320 [2024-10-08 18:33:19.461625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:01.320 [2024-10-08 18:33:19.473386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198fb480 00:26:01.320 [2024-10-08 18:33:19.474295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.320 [2024-10-08 18:33:19.474337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:01.320 [2024-10-08 18:33:19.485791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f2948 00:26:01.320 
[2024-10-08 18:33:19.486861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.320 [2024-10-08 18:33:19.486903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:01.320 [2024-10-08 18:33:19.498215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f9b30 00:26:01.320 [2024-10-08 18:33:19.499444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:25108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.320 [2024-10-08 18:33:19.499486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:01.321 [2024-10-08 18:33:19.510336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198ee190 00:26:01.321 [2024-10-08 18:33:19.511203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.321 [2024-10-08 18:33:19.511232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:01.321 [2024-10-08 18:33:19.521665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f4298 00:26:01.321 [2024-10-08 18:33:19.522846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.321 [2024-10-08 18:33:19.522874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:01.321 [2024-10-08 18:33:19.533226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198efae0 00:26:01.321 [2024-10-08 18:33:19.534406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.321 [2024-10-08 18:33:19.534448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:01.321 [2024-10-08 18:33:19.545430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f0bc0 00:26:01.321 [2024-10-08 18:33:19.546528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.321 [2024-10-08 18:33:19.546555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:01.321 [2024-10-08 18:33:19.557039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e6b70 00:26:01.321 [2024-10-08 18:33:19.557823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.321 [2024-10-08 18:33:19.557866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:01.321 [2024-10-08 18:33:19.571197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with 
pdu=0x2000198eaef0 00:26:01.321 [2024-10-08 18:33:19.572910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.321 [2024-10-08 18:33:19.572953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:01.321 [2024-10-08 18:33:19.583233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198ee190 00:26:01.321 [2024-10-08 18:33:19.585003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.321 [2024-10-08 18:33:19.585046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.321 [2024-10-08 18:33:19.591868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e7c50 00:26:01.321 [2024-10-08 18:33:19.592846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.321 [2024-10-08 18:33:19.592888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:01.321 [2024-10-08 18:33:19.606351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f2d80 00:26:01.579 [2024-10-08 18:33:19.607953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.579 [2024-10-08 18:33:19.608001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:01.579 [2024-10-08 18:33:19.615487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e8088 00:26:01.579 [2024-10-08 18:33:19.616364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.579 [2024-10-08 18:33:19.616406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:01.579 [2024-10-08 18:33:19.627716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e4578 00:26:01.579 [2024-10-08 18:33:19.628546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.579 [2024-10-08 18:33:19.628575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:01.579 [2024-10-08 18:33:19.642305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f2948 00:26:01.579 [2024-10-08 18:33:19.643825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.579 [2024-10-08 18:33:19.643868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:01.579 [2024-10-08 18:33:19.653616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x8b12d0) with pdu=0x2000198fd208 00:26:01.579 [2024-10-08 18:33:19.655046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.579 [2024-10-08 18:33:19.655088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:01.579 [2024-10-08 18:33:19.664453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198ee5c8 00:26:01.579 [2024-10-08 18:33:19.665670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.579 [2024-10-08 18:33:19.665699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:01.579 [2024-10-08 18:33:19.676064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198ecc78 00:26:01.579 [2024-10-08 18:33:19.677228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.579 [2024-10-08 18:33:19.677270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:01.579 [2024-10-08 18:33:19.688082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e5658 00:26:01.579 [2024-10-08 18:33:19.688837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.579 [2024-10-08 18:33:19.688881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:01.579 [2024-10-08 18:33:19.702241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f2d80 00:26:01.579 [2024-10-08 18:33:19.704002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.579 [2024-10-08 18:33:19.704044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:01.579 [2024-10-08 18:33:19.710767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e1710 00:26:01.579 [2024-10-08 18:33:19.711484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.579 [2024-10-08 18:33:19.711531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:01.579 [2024-10-08 18:33:19.725695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e84c0 00:26:01.579 [2024-10-08 18:33:19.727198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.579 [2024-10-08 18:33:19.727241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.579 [2024-10-08 18:33:19.737055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x8b12d0) with pdu=0x2000198e5220 00:26:01.579 [2024-10-08 18:33:19.738503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.579 [2024-10-08 18:33:19.738545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:01.579 [2024-10-08 18:33:19.749587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198ebb98 00:26:01.579 [2024-10-08 18:33:19.751206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.579 [2024-10-08 18:33:19.751248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:01.579 [2024-10-08 18:33:19.761081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e99d8 00:26:01.579 21405.00 IOPS, 83.61 MiB/s [2024-10-08T16:33:19.866Z] [2024-10-08 18:33:19.762493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.579 [2024-10-08 18:33:19.762521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:01.579 [2024-10-08 18:33:19.772854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e99d8 00:26:01.579 [2024-10-08 18:33:19.774297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.579 [2024-10-08 18:33:19.774341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:01.579 [2024-10-08 18:33:19.785042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198ef6a8 00:26:01.579 [2024-10-08 18:33:19.786495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.579 [2024-10-08 18:33:19.786536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:01.579 [2024-10-08 18:33:19.795971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198efae0 00:26:01.579 [2024-10-08 18:33:19.797334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.579 [2024-10-08 18:33:19.797363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:01.579 [2024-10-08 18:33:19.808042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e0a68 00:26:01.579 [2024-10-08 18:33:19.809138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.579 [2024-10-08 18:33:19.809180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:01.579 
[2024-10-08 18:33:19.819169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198fc560 00:26:01.579 [2024-10-08 18:33:19.820037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.579 [2024-10-08 18:33:19.820079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:01.579 [2024-10-08 18:33:19.830399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f3a28 00:26:01.579 [2024-10-08 18:33:19.831203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.579 [2024-10-08 18:33:19.831246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:01.579 [2024-10-08 18:33:19.842130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e73e0 00:26:01.579 [2024-10-08 18:33:19.843043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.579 [2024-10-08 18:33:19.843085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:01.579 [2024-10-08 18:33:19.854706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e5658 00:26:01.579 [2024-10-08 18:33:19.855797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.579 [2024-10-08 18:33:19.855840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:01.838 [2024-10-08 18:33:19.866847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e01f8 00:26:01.838 [2024-10-08 18:33:19.867605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.838 [2024-10-08 18:33:19.867635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:01.838 [2024-10-08 18:33:19.878925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198ed920 00:26:01.838 [2024-10-08 18:33:19.879554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.838 [2024-10-08 18:33:19.879598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:01.838 [2024-10-08 18:33:19.891408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f6cc8 00:26:01.838 [2024-10-08 18:33:19.892241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.838 [2024-10-08 18:33:19.892283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0064 p:0 m:0 
dnr:0 00:26:01.838 [2024-10-08 18:33:19.903709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198ed0b0 00:26:01.838 [2024-10-08 18:33:19.904791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.838 [2024-10-08 18:33:19.904833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:01.838 [2024-10-08 18:33:19.915233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f6890 00:26:01.838 [2024-10-08 18:33:19.916618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.838 [2024-10-08 18:33:19.916646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:01.838 [2024-10-08 18:33:19.927284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f6cc8 00:26:01.838 [2024-10-08 18:33:19.928356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.838 [2024-10-08 18:33:19.928383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:01.838 [2024-10-08 18:33:19.939776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e99d8 00:26:01.838 [2024-10-08 18:33:19.941065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.839 [2024-10-08 18:33:19.941109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:01.839 [2024-10-08 18:33:19.952156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198ec840 00:26:01.839 [2024-10-08 18:33:19.953709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.839 [2024-10-08 18:33:19.953752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:01.839 [2024-10-08 18:33:19.964330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e4140 00:26:01.839 [2024-10-08 18:33:19.966003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.839 [2024-10-08 18:33:19.966046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:01.839 [2024-10-08 18:33:19.974443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198dece0 00:26:01.839 [2024-10-08 18:33:19.975131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.839 [2024-10-08 18:33:19.975160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 
sqhd:0059 p:0 m:0 dnr:0 00:26:01.839 [2024-10-08 18:33:19.988118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f2d80 00:26:01.839 [2024-10-08 18:33:19.989561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.839 [2024-10-08 18:33:19.989591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:01.839 [2024-10-08 18:33:20.000720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198edd58 00:26:01.839 [2024-10-08 18:33:20.002497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.839 [2024-10-08 18:33:20.002529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:01.839 [2024-10-08 18:33:20.013261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198dece0 00:26:01.839 [2024-10-08 18:33:20.015103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.839 [2024-10-08 18:33:20.015152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:01.839 [2024-10-08 18:33:20.021762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f0bc0 00:26:01.839 [2024-10-08 18:33:20.022669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.839 [2024-10-08 18:33:20.022707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:01.839 [2024-10-08 18:33:20.036129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f0788 00:26:01.839 [2024-10-08 18:33:20.037336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.839 [2024-10-08 18:33:20.037377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:01.839 [2024-10-08 18:33:20.047653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e1b48 00:26:01.839 [2024-10-08 18:33:20.048817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.839 [2024-10-08 18:33:20.048846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:01.839 [2024-10-08 18:33:20.060553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198ec408 00:26:01.839 [2024-10-08 18:33:20.061956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.839 [2024-10-08 18:33:20.061999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:20 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:01.839 [2024-10-08 18:33:20.072939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198df988 00:26:01.839 [2024-10-08 18:33:20.073745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.839 [2024-10-08 18:33:20.073776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:01.839 [2024-10-08 18:33:20.084675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e0630 00:26:01.839 [2024-10-08 18:33:20.085855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.839 [2024-10-08 18:33:20.085883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:01.839 [2024-10-08 18:33:20.096553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f31b8 00:26:01.839 [2024-10-08 18:33:20.097810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.839 [2024-10-08 18:33:20.097838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:01.839 [2024-10-08 18:33:20.111410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e9e10 00:26:01.839 [2024-10-08 18:33:20.113352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.839 [2024-10-08 18:33:20.113382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:01.839 [2024-10-08 18:33:20.120348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e9e10 00:26:01.839 [2024-10-08 18:33:20.121407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.839 [2024-10-08 18:33:20.121435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:02.097 [2024-10-08 18:33:20.135772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f1ca0 00:26:02.097 [2024-10-08 18:33:20.137236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:15941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.097 [2024-10-08 18:33:20.137279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:02.097 [2024-10-08 18:33:20.145861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f57b0 00:26:02.097 [2024-10-08 18:33:20.146587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.097 [2024-10-08 18:33:20.146615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:02.097 [2024-10-08 18:33:20.158208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198edd58 00:26:02.097 [2024-10-08 18:33:20.159154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.097 [2024-10-08 18:33:20.159197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:02.097 [2024-10-08 18:33:20.169709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f7970 00:26:02.097 [2024-10-08 18:33:20.170469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.097 [2024-10-08 18:33:20.170498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:02.097 [2024-10-08 18:33:20.181502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f9f68 00:26:02.097 [2024-10-08 18:33:20.182450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.097 [2024-10-08 18:33:20.182478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:02.097 [2024-10-08 18:33:20.193689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e88f8 00:26:02.097 [2024-10-08 18:33:20.194538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.097 [2024-10-08 18:33:20.194565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:02.097 [2024-10-08 18:33:20.207978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e6300 00:26:02.097 [2024-10-08 18:33:20.209379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.098 [2024-10-08 18:33:20.209408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:02.098 [2024-10-08 18:33:20.219334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f2510 00:26:02.098 [2024-10-08 18:33:20.220819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.098 [2024-10-08 18:33:20.220862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:02.098 [2024-10-08 18:33:20.231787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f2948 00:26:02.098 [2024-10-08 18:33:20.232768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.098 [2024-10-08 18:33:20.232813] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:02.098 [2024-10-08 18:33:20.243183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f7da8 00:26:02.098 [2024-10-08 18:33:20.244124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.098 [2024-10-08 18:33:20.244167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:02.098 [2024-10-08 18:33:20.255702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e4578 00:26:02.098 [2024-10-08 18:33:20.256704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.098 [2024-10-08 18:33:20.256748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:02.098 [2024-10-08 18:33:20.267249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f81e0 00:26:02.098 [2024-10-08 18:33:20.268729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.098 [2024-10-08 18:33:20.268758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:02.098 [2024-10-08 18:33:20.279472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f7538 00:26:02.098 [2024-10-08 18:33:20.280761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.098 [2024-10-08 18:33:20.280803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:02.098 [2024-10-08 18:33:20.291687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198fc560 00:26:02.098 [2024-10-08 18:33:20.292488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.098 [2024-10-08 18:33:20.292517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:02.098 [2024-10-08 18:33:20.306157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198ea680 00:26:02.098 [2024-10-08 18:33:20.308104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.098 [2024-10-08 18:33:20.308132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:02.098 [2024-10-08 18:33:20.314807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198ef270 00:26:02.098 [2024-10-08 18:33:20.315727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.098 [2024-10-08 18:33:20.315767] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:02.098 [2024-10-08 18:33:20.329261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198feb58 00:26:02.098 [2024-10-08 18:33:20.330916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.098 [2024-10-08 18:33:20.330960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:02.098 [2024-10-08 18:33:20.340483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198ebfd0 00:26:02.098 [2024-10-08 18:33:20.341859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.098 [2024-10-08 18:33:20.341893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:02.098 [2024-10-08 18:33:20.351765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198eff18 00:26:02.098 [2024-10-08 18:33:20.352800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.098 [2024-10-08 18:33:20.352842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:02.098 [2024-10-08 18:33:20.363104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f57b0 00:26:02.098 [2024-10-08 18:33:20.364089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.098 [2024-10-08 18:33:20.364117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:02.098 [2024-10-08 18:33:20.377342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e99d8 00:26:02.098 [2024-10-08 18:33:20.378859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.098 [2024-10-08 18:33:20.378901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:02.356 [2024-10-08 18:33:20.388008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198ddc00 00:26:02.356 [2024-10-08 18:33:20.388761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.356 [2024-10-08 18:33:20.388790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:02.356 [2024-10-08 18:33:20.400546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198fac10 00:26:02.356 [2024-10-08 18:33:20.401372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.356 [2024-10-08 18:33:20.401416] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:02.356 [2024-10-08 18:33:20.413025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e4de8 00:26:02.356 [2024-10-08 18:33:20.414031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.356 [2024-10-08 18:33:20.414074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:02.356 [2024-10-08 18:33:20.424245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e4578 00:26:02.356 [2024-10-08 18:33:20.425882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.356 [2024-10-08 18:33:20.425910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:02.356 [2024-10-08 18:33:20.434587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f9b30 00:26:02.356 [2024-10-08 18:33:20.435384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.356 [2024-10-08 18:33:20.435412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:02.356 [2024-10-08 18:33:20.447104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e95a0 00:26:02.356 [2024-10-08 18:33:20.448036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.356 [2024-10-08 18:33:20.448078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:02.356 [2024-10-08 18:33:20.459675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198fac10 00:26:02.356 [2024-10-08 18:33:20.460805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.356 [2024-10-08 18:33:20.460846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:02.356 [2024-10-08 18:33:20.471844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198ed920 00:26:02.356 [2024-10-08 18:33:20.472918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.356 [2024-10-08 18:33:20.472946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:02.356 [2024-10-08 18:33:20.483611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198dfdc0 00:26:02.356 [2024-10-08 18:33:20.484248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.356 [2024-10-08 
18:33:20.484275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:02.356 [2024-10-08 18:33:20.496304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198df550 00:26:02.356 [2024-10-08 18:33:20.497117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.356 [2024-10-08 18:33:20.497145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:02.356 [2024-10-08 18:33:20.508757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198fef90 00:26:02.356 [2024-10-08 18:33:20.509756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.356 [2024-10-08 18:33:20.509785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:02.356 [2024-10-08 18:33:20.520181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e0ea0 00:26:02.356 [2024-10-08 18:33:20.521543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.356 [2024-10-08 18:33:20.521571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:02.356 [2024-10-08 18:33:20.532140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f46d0 00:26:02.356 [2024-10-08 18:33:20.533193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.356 [2024-10-08 18:33:20.533236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:02.356 [2024-10-08 18:33:20.544574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e6fa8 00:26:02.356 [2024-10-08 18:33:20.545926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.356 [2024-10-08 18:33:20.545968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:02.356 [2024-10-08 18:33:20.554158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198fa3a0 00:26:02.356 [2024-10-08 18:33:20.554947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.356 [2024-10-08 18:33:20.554989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:02.356 [2024-10-08 18:33:20.566709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e0a68 00:26:02.356 [2024-10-08 18:33:20.567657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:02.356 [2024-10-08 18:33:20.567685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:02.356 [2024-10-08 18:33:20.578924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198fa7d8 00:26:02.356 [2024-10-08 18:33:20.579855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.356 [2024-10-08 18:33:20.579898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:02.356 [2024-10-08 18:33:20.593257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e4140 00:26:02.356 [2024-10-08 18:33:20.594680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.356 [2024-10-08 18:33:20.594723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:02.356 [2024-10-08 18:33:20.603199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198fda78 00:26:02.356 [2024-10-08 18:33:20.603984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.356 [2024-10-08 18:33:20.604026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:02.356 [2024-10-08 18:33:20.615470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f7538 00:26:02.356 [2024-10-08 18:33:20.616535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.356 [2024-10-08 18:33:20.616578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:02.356 [2024-10-08 18:33:20.630018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e5ec8 00:26:02.356 [2024-10-08 18:33:20.631845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.356 [2024-10-08 18:33:20.631873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:02.356 [2024-10-08 18:33:20.642964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f7970 00:26:02.613 [2024-10-08 18:33:20.644963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.613 [2024-10-08 18:33:20.645006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:02.613 [2024-10-08 18:33:20.651749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198fa3a0 00:26:02.613 [2024-10-08 18:33:20.652540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5853 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:02.613 [2024-10-08 18:33:20.652587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:02.613 [2024-10-08 18:33:20.664175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198f8e88 00:26:02.613 [2024-10-08 18:33:20.665135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.613 [2024-10-08 18:33:20.665178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:02.613 [2024-10-08 18:33:20.676603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198eff18 00:26:02.613 [2024-10-08 18:33:20.677951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.613 [2024-10-08 18:33:20.677995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:02.613 [2024-10-08 18:33:20.689102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e6738 00:26:02.613 [2024-10-08 18:33:20.690539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.613 [2024-10-08 18:33:20.690566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:02.613 [2024-10-08 18:33:20.701218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198de470 00:26:02.613 [2024-10-08 18:33:20.702782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.613 [2024-10-08 18:33:20.702825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:02.613 [2024-10-08 18:33:20.712375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e9e10 00:26:02.613 [2024-10-08 18:33:20.713777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.613 [2024-10-08 18:33:20.713804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.613 [2024-10-08 18:33:20.724114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e99d8 00:26:02.613 [2024-10-08 18:33:20.725439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.614 [2024-10-08 18:33:20.725480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:02.614 [2024-10-08 18:33:20.736045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198ec840 00:26:02.614 [2024-10-08 18:33:20.737125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21527 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000
00:26:02.614 [2024-10-08 18:33:20.737167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:02.614 [2024-10-08 18:33:20.747127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198ee5c8
00:26:02.614 [2024-10-08 18:33:20.749101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:02.614 [2024-10-08 18:33:20.749130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:02.614 [2024-10-08 18:33:20.759844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b12d0) with pdu=0x2000198e27f0
00:26:02.614 [2024-10-08 18:33:20.760826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:02.614 [2024-10-08 18:33:20.760869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:02.614 21311.50 IOPS, 83.25 MiB/s
00:26:02.614 Latency(us)
00:26:02.614 [2024-10-08T16:33:20.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:02.614 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:26:02.614 nvme0n1 : 2.01 21321.77 83.29 0.00 0.00 5994.51 2669.99 15825.73
00:26:02.614 [2024-10-08T16:33:20.901Z] ===================================================================================================================
00:26:02.614 [2024-10-08T16:33:20.901Z] Total : 21321.77 83.29 0.00 0.00 5994.51 2669.99 15825.73
00:26:02.614 {
00:26:02.614 "results": [
00:26:02.614 {
00:26:02.614 "job": "nvme0n1",
00:26:02.614 "core_mask": "0x2",
00:26:02.614 "workload": "randwrite",
00:26:02.614 "status": "finished",
00:26:02.614 "queue_depth": 128,
00:26:02.614 "io_size": 4096,
00:26:02.614 "runtime": 2.00504,
00:26:02.614 "iops": 21321.769141762757,
00:26:02.614 "mibps": 83.28816071001077,
00:26:02.614 "io_failed": 0,
00:26:02.614 "io_timeout": 0,
00:26:02.614 "avg_latency_us": 5994.506540942945,
00:26:02.614 "min_latency_us": 2669.9851851851854,
00:26:02.614 "max_latency_us": 15825.730370370371
00:26:02.614 }
00:26:02.614 ],
00:26:02.614 "core_count": 1
00:26:02.614 }
00:26:02.614 18:33:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:02.614 18:33:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:02.614 18:33:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:02.614 18:33:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:02.614 | .driver_specific
00:26:02.614 | .nvme_error
00:26:02.614 | .status_code
00:26:02.614 | .command_transient_transport_error'
00:26:02.871 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 167 > 0 ))
00:26:02.871 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1354280
00:26:02.871 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z
1354280 ']'
00:26:02.871 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1354280
00:26:02.871 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:26:02.871 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:02.871 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1354280
00:26:02.871 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:26:02.871 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:26:02.871 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1354280'
00:26:02.871 killing process with pid 1354280
00:26:02.871 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1354280
00:26:02.871 Received shutdown signal, test time was about 2.000000 seconds
00:26:02.871
00:26:02.871 Latency(us)
00:26:02.871 [2024-10-08T16:33:21.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:02.871 [2024-10-08T16:33:21.158Z] ===================================================================================================================
00:26:02.871 [2024-10-08T16:33:21.158Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:02.871 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1354280
00:26:03.128 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:26:03.128 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:03.128 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:26:03.128 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:26:03.128 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:26:03.128 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1354812
00:26:03.128 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:26:03.128 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1354812 /var/tmp/bperf.sock
00:26:03.128 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1354812 ']'
00:26:03.128 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:03.128 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:03.128 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:03.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:03.128 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:03.128 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:03.128 [2024-10-08 18:33:21.385252] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:26:03.128 [2024-10-08 18:33:21.385346] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1354812 ] 00:26:03.129 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:03.129 Zero copy mechanism will not be used. 00:26:03.386 [2024-10-08 18:33:21.443359] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.386 [2024-10-08 18:33:21.552831] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:03.386 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:03.386 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:26:03.386 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:03.386 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:03.642 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:03.642 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.642 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:03.938 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.938 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:03.938 18:33:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:04.223 nvme0n1 00:26:04.223 18:33:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:04.223 18:33:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.223 18:33:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:04.223 18:33:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.223 18:33:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:04.223 18:33:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 
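The xtrace above completes the setup for this 128 KiB / qd 16 digest-error pass. Condensed into a plain shell sketch for readability (an aside reconstructed only from the commands already traced, not part of the harness; the RPC shorthand and comments are added here, and sending accel_error_inject_error to the default RPC socket versus -s /var/tmp/bperf.sock simply mirrors the rpc_cmd / bperf_rpc split in the trace):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# bdevperf side: keep per-command NVMe error statistics and retry failed I/O indefinitely
$RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# attach the target namespace with the TCP data digest (--ddgst) enabled
$RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# enable crc32c corruption in the accel error module (same arguments as the traced rpc_cmd call)
$RPC accel_error_inject_error -o crc32c -t corrupt -i 32
# drive the timed randwrite workload that bdevperf was started with (-w randwrite -o 131072 -q 16 -t 2)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
# the pass is judged on this counter being non-zero, cf. "(( 167 > 0 ))" in the previous pass above
$RPC -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'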
00:26:04.223 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:04.223 Zero copy mechanism will not be used. 00:26:04.223 Running I/O for 2 seconds... 00:26:04.481 [2024-10-08 18:33:22.515089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.481 [2024-10-08 18:33:22.515455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.481 [2024-10-08 18:33:22.515496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.481 [2024-10-08 18:33:22.520788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.481 [2024-10-08 18:33:22.521090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.481 [2024-10-08 18:33:22.521119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.481 [2024-10-08 18:33:22.526205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.481 [2024-10-08 18:33:22.526520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.481 [2024-10-08 18:33:22.526550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.481 [2024-10-08 18:33:22.531622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.481 [2024-10-08 18:33:22.531933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.481 [2024-10-08 18:33:22.531961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.481 [2024-10-08 18:33:22.536934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.481 [2024-10-08 18:33:22.537224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.481 [2024-10-08 18:33:22.537252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.481 [2024-10-08 18:33:22.542186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.481 [2024-10-08 18:33:22.542494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.481 [2024-10-08 18:33:22.542523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.481 [2024-10-08 18:33:22.547521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.481 [2024-10-08 18:33:22.547825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:04.481 [2024-10-08 18:33:22.547852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.481 [2024-10-08 18:33:22.552833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.481 [2024-10-08 18:33:22.553113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.481 [2024-10-08 18:33:22.553140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.481 [2024-10-08 18:33:22.557950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.481 [2024-10-08 18:33:22.558230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.481 [2024-10-08 18:33:22.558257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.481 [2024-10-08 18:33:22.563197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.481 [2024-10-08 18:33:22.563562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.481 [2024-10-08 18:33:22.563591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.481 [2024-10-08 18:33:22.568603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.481 [2024-10-08 18:33:22.568891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.481 [2024-10-08 18:33:22.568919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.481 [2024-10-08 18:33:22.573964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.481 [2024-10-08 18:33:22.574371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.481 [2024-10-08 18:33:22.574400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.481 [2024-10-08 18:33:22.579590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.481 [2024-10-08 18:33:22.579891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.481 [2024-10-08 18:33:22.579918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.481 [2024-10-08 18:33:22.585631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.481 [2024-10-08 18:33:22.585938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.481 [2024-10-08 18:33:22.585966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.481 [2024-10-08 18:33:22.591270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.481 [2024-10-08 18:33:22.591582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.481 [2024-10-08 18:33:22.591611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.481 [2024-10-08 18:33:22.597144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.481 [2024-10-08 18:33:22.597482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.481 [2024-10-08 18:33:22.597520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.481 [2024-10-08 18:33:22.602677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.481 [2024-10-08 18:33:22.602978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.481 [2024-10-08 18:33:22.603007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.481 [2024-10-08 18:33:22.608229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.481 [2024-10-08 18:33:22.608544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.481 [2024-10-08 18:33:22.608576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.481 [2024-10-08 18:33:22.613953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.481 [2024-10-08 18:33:22.614233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.481 [2024-10-08 18:33:22.614261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.481 [2024-10-08 18:33:22.619773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.481 [2024-10-08 18:33:22.620156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.481 [2024-10-08 18:33:22.620185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.481 [2024-10-08 18:33:22.625158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.481 [2024-10-08 18:33:22.625523] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.481 [2024-10-08 18:33:22.625566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.481 [2024-10-08 18:33:22.631128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.481 [2024-10-08 18:33:22.631450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.481 [2024-10-08 18:33:22.631480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.481 [2024-10-08 18:33:22.636533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.481 [2024-10-08 18:33:22.636836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.481 [2024-10-08 18:33:22.636865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.481 [2024-10-08 18:33:22.641856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.481 [2024-10-08 18:33:22.642248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.481 [2024-10-08 18:33:22.642302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.481 [2024-10-08 18:33:22.647223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.481 [2024-10-08 18:33:22.647577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.481 [2024-10-08 18:33:22.647608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.481 [2024-10-08 18:33:22.652519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.481 [2024-10-08 18:33:22.652802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.481 [2024-10-08 18:33:22.652830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.481 [2024-10-08 18:33:22.657677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.481 [2024-10-08 18:33:22.657958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.482 [2024-10-08 18:33:22.657987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.482 [2024-10-08 18:33:22.662890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.482 [2024-10-08 18:33:22.663179] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.482 [2024-10-08 18:33:22.663208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.482 [2024-10-08 18:33:22.668894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.482 [2024-10-08 18:33:22.669177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.482 [2024-10-08 18:33:22.669206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.482 [2024-10-08 18:33:22.675343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.482 [2024-10-08 18:33:22.675628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.482 [2024-10-08 18:33:22.675657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.482 [2024-10-08 18:33:22.682952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.482 [2024-10-08 18:33:22.683297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.482 [2024-10-08 18:33:22.683327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.482 [2024-10-08 18:33:22.690158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.482 [2024-10-08 18:33:22.690449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.482 [2024-10-08 18:33:22.690478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.482 [2024-10-08 18:33:22.697032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.482 [2024-10-08 18:33:22.697352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.482 [2024-10-08 18:33:22.697382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.482 [2024-10-08 18:33:22.704465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.482 [2024-10-08 18:33:22.704778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.482 [2024-10-08 18:33:22.704807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.482 [2024-10-08 18:33:22.711510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.482 [2024-10-08 
18:33:22.711828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.482 [2024-10-08 18:33:22.711857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.482 [2024-10-08 18:33:22.718386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.482 [2024-10-08 18:33:22.718670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.482 [2024-10-08 18:33:22.718700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.482 [2024-10-08 18:33:22.724689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.482 [2024-10-08 18:33:22.725001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.482 [2024-10-08 18:33:22.725031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.482 [2024-10-08 18:33:22.730807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.482 [2024-10-08 18:33:22.731087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.482 [2024-10-08 18:33:22.731115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.482 [2024-10-08 18:33:22.735896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.482 [2024-10-08 18:33:22.736176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.482 [2024-10-08 18:33:22.736205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.482 [2024-10-08 18:33:22.741064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.482 [2024-10-08 18:33:22.741353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.482 [2024-10-08 18:33:22.741381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.482 [2024-10-08 18:33:22.746226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.482 [2024-10-08 18:33:22.746515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.482 [2024-10-08 18:33:22.746544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.482 [2024-10-08 18:33:22.751442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with 
pdu=0x2000198fef90 00:26:04.482 [2024-10-08 18:33:22.751736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.482 [2024-10-08 18:33:22.751771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.482 [2024-10-08 18:33:22.757526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.482 [2024-10-08 18:33:22.757851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.482 [2024-10-08 18:33:22.757879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.482 [2024-10-08 18:33:22.763905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.482 [2024-10-08 18:33:22.764180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.482 [2024-10-08 18:33:22.764213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.741 [2024-10-08 18:33:22.769955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.741 [2024-10-08 18:33:22.770403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.741 [2024-10-08 18:33:22.770445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.741 [2024-10-08 18:33:22.776438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.741 [2024-10-08 18:33:22.776622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.741 [2024-10-08 18:33:22.776653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.741 [2024-10-08 18:33:22.783239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.741 [2024-10-08 18:33:22.783535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.741 [2024-10-08 18:33:22.783566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.741 [2024-10-08 18:33:22.789359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.741 [2024-10-08 18:33:22.789686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.741 [2024-10-08 18:33:22.789714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.741 [2024-10-08 18:33:22.795750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.741 [2024-10-08 18:33:22.796097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.741 [2024-10-08 18:33:22.796126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.741 [2024-10-08 18:33:22.802346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.741 [2024-10-08 18:33:22.802543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.741 [2024-10-08 18:33:22.802573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.741 [2024-10-08 18:33:22.808749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.741 [2024-10-08 18:33:22.809072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.741 [2024-10-08 18:33:22.809100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.741 [2024-10-08 18:33:22.814722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.741 [2024-10-08 18:33:22.815030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.741 [2024-10-08 18:33:22.815058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.741 [2024-10-08 18:33:22.820934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.741 [2024-10-08 18:33:22.821282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.741 [2024-10-08 18:33:22.821319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.741 [2024-10-08 18:33:22.826583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.741 [2024-10-08 18:33:22.826897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.741 [2024-10-08 18:33:22.826926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.741 [2024-10-08 18:33:22.832061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.741 [2024-10-08 18:33:22.832354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.741 [2024-10-08 18:33:22.832383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.741 [2024-10-08 18:33:22.837211] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.741 [2024-10-08 18:33:22.837537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.741 [2024-10-08 18:33:22.837566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.741 [2024-10-08 18:33:22.842915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.741 [2024-10-08 18:33:22.843199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.741 [2024-10-08 18:33:22.843228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.741 [2024-10-08 18:33:22.849253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.741 [2024-10-08 18:33:22.849543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.742 [2024-10-08 18:33:22.849572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.742 [2024-10-08 18:33:22.854668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.742 [2024-10-08 18:33:22.854960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.742 [2024-10-08 18:33:22.854995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.742 [2024-10-08 18:33:22.861027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.742 [2024-10-08 18:33:22.861348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.742 [2024-10-08 18:33:22.861377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.742 [2024-10-08 18:33:22.867269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.742 [2024-10-08 18:33:22.867605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.742 [2024-10-08 18:33:22.867632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.742 [2024-10-08 18:33:22.874646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.742 [2024-10-08 18:33:22.874937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.742 [2024-10-08 18:33:22.874965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:26:04.742 [2024-10-08 18:33:22.881528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.742 [2024-10-08 18:33:22.881616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.742 [2024-10-08 18:33:22.881643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.742 [2024-10-08 18:33:22.886584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.742 [2024-10-08 18:33:22.886860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.742 [2024-10-08 18:33:22.886888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.742 [2024-10-08 18:33:22.891551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.742 [2024-10-08 18:33:22.891827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.742 [2024-10-08 18:33:22.891855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.742 [2024-10-08 18:33:22.896635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.742 [2024-10-08 18:33:22.896914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.742 [2024-10-08 18:33:22.896941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.742 [2024-10-08 18:33:22.901701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.742 [2024-10-08 18:33:22.901976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.742 [2024-10-08 18:33:22.902004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.742 [2024-10-08 18:33:22.906746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.742 [2024-10-08 18:33:22.907019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.742 [2024-10-08 18:33:22.907048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.742 [2024-10-08 18:33:22.911615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.742 [2024-10-08 18:33:22.911885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.742 [2024-10-08 18:33:22.911914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.742 [2024-10-08 18:33:22.916444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.742 [2024-10-08 18:33:22.916723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.742 [2024-10-08 18:33:22.916750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.742 [2024-10-08 18:33:22.921394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.742 [2024-10-08 18:33:22.921672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.742 [2024-10-08 18:33:22.921700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.742 [2024-10-08 18:33:22.926223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.742 [2024-10-08 18:33:22.926495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.742 [2024-10-08 18:33:22.926524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.742 [2024-10-08 18:33:22.931198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.742 [2024-10-08 18:33:22.931500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.742 [2024-10-08 18:33:22.931529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.742 [2024-10-08 18:33:22.936645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.742 [2024-10-08 18:33:22.937021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.742 [2024-10-08 18:33:22.937064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.742 [2024-10-08 18:33:22.942930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.742 [2024-10-08 18:33:22.943246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.742 [2024-10-08 18:33:22.943296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.742 [2024-10-08 18:33:22.949658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:04.742 [2024-10-08 18:33:22.950048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.742 [2024-10-08 18:33:22.950075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:04.742 [2024-10-08 18:33:22.955697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90
00:26:04.742 [2024-10-08 18:33:22.955949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.742 [2024-10-08 18:33:22.955978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line pattern repeats from 18:33:22.960 through 18:33:23.500 for further WRITE commands on sqid:1 cid:15 (len:32, varying LBAs): tcp.c:2233:data_crc32_calc_done reports a Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 and each command completes with TRANSIENT TRANSPORT ERROR (00/22) ...]
00:26:05.262 5388.00 IOPS, 673.50 MiB/s [2024-10-08T16:33:23.549Z]
[... the same three-line pattern repeats from 18:33:23.505 through 18:33:23.777 ...]
00:26:05.522 [2024-10-08 18:33:23.784222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.522 [2024-10-08 18:33:23.784613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.522 [2024-10-08 18:33:23.784642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.522 [2024-10-08 18:33:23.790064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.522 [2024-10-08 18:33:23.790353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.522 [2024-10-08 18:33:23.790384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.522 [2024-10-08 18:33:23.795181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.522 [2024-10-08 18:33:23.795466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.523 [2024-10-08 18:33:23.795495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.523 [2024-10-08 18:33:23.800545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.523 [2024-10-08 18:33:23.800811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.523 [2024-10-08 18:33:23.800840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.523 [2024-10-08 18:33:23.805349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.523 [2024-10-08 18:33:23.805513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.523 [2024-10-08 18:33:23.805541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.781 [2024-10-08 18:33:23.810546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.781 [2024-10-08 18:33:23.810837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.781 [2024-10-08 18:33:23.810864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.781 [2024-10-08 18:33:23.815980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.781 [2024-10-08 18:33:23.816243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.781 [2024-10-08 18:33:23.816292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.781 [2024-10-08 18:33:23.820920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.781 [2024-10-08 18:33:23.821196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.781 [2024-10-08 18:33:23.821240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.781 [2024-10-08 18:33:23.825919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.781 [2024-10-08 18:33:23.826191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.781 [2024-10-08 18:33:23.826219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.781 [2024-10-08 18:33:23.831416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.781 [2024-10-08 18:33:23.831712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.781 [2024-10-08 18:33:23.831740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.781 [2024-10-08 18:33:23.837956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.781 [2024-10-08 18:33:23.838247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.782 [2024-10-08 18:33:23.838280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.782 [2024-10-08 18:33:23.844215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.782 [2024-10-08 18:33:23.844582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.782 [2024-10-08 18:33:23.844609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.782 [2024-10-08 18:33:23.850824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.782 [2024-10-08 18:33:23.851193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.782 [2024-10-08 18:33:23.851235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.782 [2024-10-08 18:33:23.857521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.782 [2024-10-08 18:33:23.857825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.782 [2024-10-08 18:33:23.857852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.782 [2024-10-08 18:33:23.864480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.782 [2024-10-08 18:33:23.864845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.782 [2024-10-08 18:33:23.864872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.782 [2024-10-08 18:33:23.871668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.782 [2024-10-08 18:33:23.872057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.782 [2024-10-08 18:33:23.872099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.782 [2024-10-08 18:33:23.878236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.782 [2024-10-08 18:33:23.878535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.782 [2024-10-08 18:33:23.878563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.782 [2024-10-08 18:33:23.884242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.782 [2024-10-08 18:33:23.884516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.782 [2024-10-08 18:33:23.884545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.782 [2024-10-08 18:33:23.889575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.782 [2024-10-08 18:33:23.889850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.782 [2024-10-08 18:33:23.889878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.782 [2024-10-08 18:33:23.895724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.782 [2024-10-08 18:33:23.896020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.782 [2024-10-08 18:33:23.896048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.782 [2024-10-08 18:33:23.901902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.782 [2024-10-08 18:33:23.902212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.782 [2024-10-08 18:33:23.902240] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.782 [2024-10-08 18:33:23.907900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.782 [2024-10-08 18:33:23.908184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.782 [2024-10-08 18:33:23.908212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.782 [2024-10-08 18:33:23.913810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.782 [2024-10-08 18:33:23.914128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.782 [2024-10-08 18:33:23.914155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.782 [2024-10-08 18:33:23.920010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.782 [2024-10-08 18:33:23.920311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.782 [2024-10-08 18:33:23.920340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.782 [2024-10-08 18:33:23.926769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.782 [2024-10-08 18:33:23.927149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.782 [2024-10-08 18:33:23.927190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.782 [2024-10-08 18:33:23.933321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.782 [2024-10-08 18:33:23.933600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.782 [2024-10-08 18:33:23.933629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.782 [2024-10-08 18:33:23.938433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.782 [2024-10-08 18:33:23.938695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.782 [2024-10-08 18:33:23.938723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.782 [2024-10-08 18:33:23.943643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.782 [2024-10-08 18:33:23.943934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.782 
[2024-10-08 18:33:23.943961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.782 [2024-10-08 18:33:23.948814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.782 [2024-10-08 18:33:23.949090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.782 [2024-10-08 18:33:23.949118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.782 [2024-10-08 18:33:23.953886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.782 [2024-10-08 18:33:23.954151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.782 [2024-10-08 18:33:23.954179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.782 [2024-10-08 18:33:23.959173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.782 [2024-10-08 18:33:23.959461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.782 [2024-10-08 18:33:23.959490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.782 [2024-10-08 18:33:23.965346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.782 [2024-10-08 18:33:23.965621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.782 [2024-10-08 18:33:23.965649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.782 [2024-10-08 18:33:23.970377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.782 [2024-10-08 18:33:23.970630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.782 [2024-10-08 18:33:23.970659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.782 [2024-10-08 18:33:23.975138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.782 [2024-10-08 18:33:23.975397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.782 [2024-10-08 18:33:23.975426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.782 [2024-10-08 18:33:23.980696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.782 [2024-10-08 18:33:23.980998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:05.782 [2024-10-08 18:33:23.981026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.782 [2024-10-08 18:33:23.986119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.782 [2024-10-08 18:33:23.986392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.782 [2024-10-08 18:33:23.986421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.782 [2024-10-08 18:33:23.991465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.782 [2024-10-08 18:33:23.991761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.782 [2024-10-08 18:33:23.991794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.782 [2024-10-08 18:33:23.996779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.782 [2024-10-08 18:33:23.997072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.782 [2024-10-08 18:33:23.997099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.782 [2024-10-08 18:33:24.002882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.782 [2024-10-08 18:33:24.003161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.783 [2024-10-08 18:33:24.003189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.783 [2024-10-08 18:33:24.008268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.783 [2024-10-08 18:33:24.008549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.783 [2024-10-08 18:33:24.008577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.783 [2024-10-08 18:33:24.014160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.783 [2024-10-08 18:33:24.014461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.783 [2024-10-08 18:33:24.014490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.783 [2024-10-08 18:33:24.020294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.783 [2024-10-08 18:33:24.020607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.783 [2024-10-08 18:33:24.020636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.783 [2024-10-08 18:33:24.025405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.783 [2024-10-08 18:33:24.025683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.783 [2024-10-08 18:33:24.025725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.783 [2024-10-08 18:33:24.030434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.783 [2024-10-08 18:33:24.030714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.783 [2024-10-08 18:33:24.030741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.783 [2024-10-08 18:33:24.035471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.783 [2024-10-08 18:33:24.035765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.783 [2024-10-08 18:33:24.035792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.783 [2024-10-08 18:33:24.041351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.783 [2024-10-08 18:33:24.041724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.783 [2024-10-08 18:33:24.041766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.783 [2024-10-08 18:33:24.047583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.783 [2024-10-08 18:33:24.047860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.783 [2024-10-08 18:33:24.047889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.783 [2024-10-08 18:33:24.053341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.783 [2024-10-08 18:33:24.053654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.783 [2024-10-08 18:33:24.053697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.783 [2024-10-08 18:33:24.060196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.783 [2024-10-08 18:33:24.060469] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.783 [2024-10-08 18:33:24.060498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.783 [2024-10-08 18:33:24.065529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:05.783 [2024-10-08 18:33:24.065784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.783 [2024-10-08 18:33:24.065811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.042 [2024-10-08 18:33:24.070712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.042 [2024-10-08 18:33:24.071012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.042 [2024-10-08 18:33:24.071042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.042 [2024-10-08 18:33:24.075865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.042 [2024-10-08 18:33:24.076136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.042 [2024-10-08 18:33:24.076163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.042 [2024-10-08 18:33:24.080804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.042 [2024-10-08 18:33:24.081075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.042 [2024-10-08 18:33:24.081102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.042 [2024-10-08 18:33:24.085789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.042 [2024-10-08 18:33:24.086057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.042 [2024-10-08 18:33:24.086085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.042 [2024-10-08 18:33:24.090778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.042 [2024-10-08 18:33:24.091050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.042 [2024-10-08 18:33:24.091078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.042 [2024-10-08 18:33:24.095758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.042 [2024-10-08 18:33:24.096029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.042 [2024-10-08 18:33:24.096056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.042 [2024-10-08 18:33:24.100787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.042 [2024-10-08 18:33:24.101074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.042 [2024-10-08 18:33:24.101102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.042 [2024-10-08 18:33:24.105603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.042 [2024-10-08 18:33:24.105878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.042 [2024-10-08 18:33:24.105905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.042 [2024-10-08 18:33:24.111208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.042 [2024-10-08 18:33:24.111486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.042 [2024-10-08 18:33:24.111515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.042 [2024-10-08 18:33:24.117269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.042 [2024-10-08 18:33:24.117638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.042 [2024-10-08 18:33:24.117666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.042 [2024-10-08 18:33:24.123485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.042 [2024-10-08 18:33:24.123761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.042 [2024-10-08 18:33:24.123788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.042 [2024-10-08 18:33:24.129666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.042 [2024-10-08 18:33:24.130037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.042 [2024-10-08 18:33:24.130065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.042 [2024-10-08 18:33:24.137009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.042 [2024-10-08 
18:33:24.137309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.042 [2024-10-08 18:33:24.137344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.042 [2024-10-08 18:33:24.143534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.042 [2024-10-08 18:33:24.143838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.042 [2024-10-08 18:33:24.143865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.042 [2024-10-08 18:33:24.149891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.042 [2024-10-08 18:33:24.150155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.042 [2024-10-08 18:33:24.150183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.042 [2024-10-08 18:33:24.156919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.042 [2024-10-08 18:33:24.157351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.042 [2024-10-08 18:33:24.157377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.042 [2024-10-08 18:33:24.164379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.042 [2024-10-08 18:33:24.164660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.042 [2024-10-08 18:33:24.164690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.042 [2024-10-08 18:33:24.169697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.042 [2024-10-08 18:33:24.170002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.042 [2024-10-08 18:33:24.170031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.042 [2024-10-08 18:33:24.174607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.042 [2024-10-08 18:33:24.174873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.042 [2024-10-08 18:33:24.174902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.042 [2024-10-08 18:33:24.179426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with 
pdu=0x2000198fef90 00:26:06.042 [2024-10-08 18:33:24.179691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.042 [2024-10-08 18:33:24.179719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.042 [2024-10-08 18:33:24.184254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.042 [2024-10-08 18:33:24.184553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.043 [2024-10-08 18:33:24.184596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.043 [2024-10-08 18:33:24.189341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.043 [2024-10-08 18:33:24.189641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.043 [2024-10-08 18:33:24.189669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.043 [2024-10-08 18:33:24.194596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.043 [2024-10-08 18:33:24.194861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.043 [2024-10-08 18:33:24.194889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.043 [2024-10-08 18:33:24.199605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.043 [2024-10-08 18:33:24.199880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.043 [2024-10-08 18:33:24.199908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.043 [2024-10-08 18:33:24.205135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.043 [2024-10-08 18:33:24.205421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.043 [2024-10-08 18:33:24.205449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.043 [2024-10-08 18:33:24.210794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.043 [2024-10-08 18:33:24.211113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.043 [2024-10-08 18:33:24.211140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.043 [2024-10-08 18:33:24.216418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.043 [2024-10-08 18:33:24.216687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.043 [2024-10-08 18:33:24.216715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.043 [2024-10-08 18:33:24.221877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.043 [2024-10-08 18:33:24.222150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.043 [2024-10-08 18:33:24.222177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.043 [2024-10-08 18:33:24.227353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.043 [2024-10-08 18:33:24.227620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.043 [2024-10-08 18:33:24.227662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.043 [2024-10-08 18:33:24.232802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.043 [2024-10-08 18:33:24.233091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.043 [2024-10-08 18:33:24.233119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.043 [2024-10-08 18:33:24.237840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.043 [2024-10-08 18:33:24.238119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.043 [2024-10-08 18:33:24.238147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.043 [2024-10-08 18:33:24.243689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.043 [2024-10-08 18:33:24.243938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.043 [2024-10-08 18:33:24.243967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.043 [2024-10-08 18:33:24.249660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.043 [2024-10-08 18:33:24.250024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.043 [2024-10-08 18:33:24.250052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.043 [2024-10-08 18:33:24.256403] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.043 [2024-10-08 18:33:24.256683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.043 [2024-10-08 18:33:24.256711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.043 [2024-10-08 18:33:24.263396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.043 [2024-10-08 18:33:24.263663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.043 [2024-10-08 18:33:24.263692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.043 [2024-10-08 18:33:24.269471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.043 [2024-10-08 18:33:24.269781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.043 [2024-10-08 18:33:24.269810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.043 [2024-10-08 18:33:24.275653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.043 [2024-10-08 18:33:24.275993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.043 [2024-10-08 18:33:24.276022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.043 [2024-10-08 18:33:24.281818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.043 [2024-10-08 18:33:24.282123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.043 [2024-10-08 18:33:24.282152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.043 [2024-10-08 18:33:24.287935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.043 [2024-10-08 18:33:24.288229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.043 [2024-10-08 18:33:24.288262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.043 [2024-10-08 18:33:24.294839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.043 [2024-10-08 18:33:24.295220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.043 [2024-10-08 18:33:24.295248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:26:06.043 [2024-10-08 18:33:24.301779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.043 [2024-10-08 18:33:24.302142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.043 [2024-10-08 18:33:24.302171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.043 [2024-10-08 18:33:24.308781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.043 [2024-10-08 18:33:24.309082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.043 [2024-10-08 18:33:24.309111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.043 [2024-10-08 18:33:24.315760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.043 [2024-10-08 18:33:24.316072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.043 [2024-10-08 18:33:24.316101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.043 [2024-10-08 18:33:24.322093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.043 [2024-10-08 18:33:24.322412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.043 [2024-10-08 18:33:24.322441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.043 [2024-10-08 18:33:24.328179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.043 [2024-10-08 18:33:24.328469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.043 [2024-10-08 18:33:24.328499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.302 [2024-10-08 18:33:24.334445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.302 [2024-10-08 18:33:24.334727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.302 [2024-10-08 18:33:24.334755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.302 [2024-10-08 18:33:24.340506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.302 [2024-10-08 18:33:24.340786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.302 [2024-10-08 18:33:24.340814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.302 [2024-10-08 18:33:24.345360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.302 [2024-10-08 18:33:24.345650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.302 [2024-10-08 18:33:24.345678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.302 [2024-10-08 18:33:24.350349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.302 [2024-10-08 18:33:24.350628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.302 [2024-10-08 18:33:24.350656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.302 [2024-10-08 18:33:24.355332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.302 [2024-10-08 18:33:24.355614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.303 [2024-10-08 18:33:24.355643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.303 [2024-10-08 18:33:24.360451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.303 [2024-10-08 18:33:24.360743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.303 [2024-10-08 18:33:24.360771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.303 [2024-10-08 18:33:24.365529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.303 [2024-10-08 18:33:24.365811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.303 [2024-10-08 18:33:24.365840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.303 [2024-10-08 18:33:24.371761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.303 [2024-10-08 18:33:24.372068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.303 [2024-10-08 18:33:24.372097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.303 [2024-10-08 18:33:24.377412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.303 [2024-10-08 18:33:24.377681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.303 [2024-10-08 18:33:24.377710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.303 [2024-10-08 18:33:24.384502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.303 [2024-10-08 18:33:24.384821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.303 [2024-10-08 18:33:24.384864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.303 [2024-10-08 18:33:24.390115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.303 [2024-10-08 18:33:24.390386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.303 [2024-10-08 18:33:24.390416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.303 [2024-10-08 18:33:24.395170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.303 [2024-10-08 18:33:24.395456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.303 [2024-10-08 18:33:24.395484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.303 [2024-10-08 18:33:24.400380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.303 [2024-10-08 18:33:24.400660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.303 [2024-10-08 18:33:24.400688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.303 [2024-10-08 18:33:24.405691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.303 [2024-10-08 18:33:24.405970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.303 [2024-10-08 18:33:24.405998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.303 [2024-10-08 18:33:24.410876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.303 [2024-10-08 18:33:24.411158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.303 [2024-10-08 18:33:24.411185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.303 [2024-10-08 18:33:24.416100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.303 [2024-10-08 18:33:24.416373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.303 [2024-10-08 18:33:24.416402] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.303 [2024-10-08 18:33:24.422106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.303 [2024-10-08 18:33:24.422476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.303 [2024-10-08 18:33:24.422505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.303 [2024-10-08 18:33:24.428476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.303 [2024-10-08 18:33:24.428868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.303 [2024-10-08 18:33:24.428911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.303 [2024-10-08 18:33:24.435313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.303 [2024-10-08 18:33:24.435581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.303 [2024-10-08 18:33:24.435610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.303 [2024-10-08 18:33:24.442082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.303 [2024-10-08 18:33:24.442373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.303 [2024-10-08 18:33:24.442407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.303 [2024-10-08 18:33:24.448651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.303 [2024-10-08 18:33:24.448939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.303 [2024-10-08 18:33:24.448966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.303 [2024-10-08 18:33:24.455467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.303 [2024-10-08 18:33:24.455732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.303 [2024-10-08 18:33:24.455761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.303 [2024-10-08 18:33:24.461583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.303 [2024-10-08 18:33:24.461862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.303 
[2024-10-08 18:33:24.461890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.303 [2024-10-08 18:33:24.467281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.303 [2024-10-08 18:33:24.467588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.303 [2024-10-08 18:33:24.467615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.303 [2024-10-08 18:33:24.473429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.303 [2024-10-08 18:33:24.473834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.303 [2024-10-08 18:33:24.473863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.303 [2024-10-08 18:33:24.479694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.303 [2024-10-08 18:33:24.479977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.303 [2024-10-08 18:33:24.480005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.303 [2024-10-08 18:33:24.486318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.303 [2024-10-08 18:33:24.486643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.303 [2024-10-08 18:33:24.486672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.303 [2024-10-08 18:33:24.493373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.303 [2024-10-08 18:33:24.493643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.303 [2024-10-08 18:33:24.493671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.303 [2024-10-08 18:33:24.500203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.303 [2024-10-08 18:33:24.500487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.303 [2024-10-08 18:33:24.500516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.303 5361.00 IOPS, 670.12 MiB/s [2024-10-08T16:33:24.590Z] [2024-10-08 18:33:24.508008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b1610) with pdu=0x2000198fef90 00:26:06.303 [2024-10-08 18:33:24.508326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.303 [2024-10-08 18:33:24.508357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.303 00:26:06.303 Latency(us) 00:26:06.303 [2024-10-08T16:33:24.590Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.303 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:06.303 nvme0n1 : 2.01 5357.21 669.65 0.00 0.00 2979.20 2148.12 11165.39 00:26:06.303 [2024-10-08T16:33:24.590Z] =================================================================================================================== 00:26:06.303 [2024-10-08T16:33:24.590Z] Total : 5357.21 669.65 0.00 0.00 2979.20 2148.12 11165.39 00:26:06.303 { 00:26:06.303 "results": [ 00:26:06.303 { 00:26:06.303 "job": "nvme0n1", 00:26:06.303 "core_mask": "0x2", 00:26:06.303 "workload": "randwrite", 00:26:06.303 "status": "finished", 00:26:06.303 "queue_depth": 16, 00:26:06.303 "io_size": 131072, 00:26:06.303 "runtime": 2.005148, 00:26:06.303 "iops": 5357.21054006986, 00:26:06.303 "mibps": 669.6513175087325, 00:26:06.303 "io_failed": 0, 00:26:06.303 "io_timeout": 0, 00:26:06.303 "avg_latency_us": 2979.195238075536, 00:26:06.304 "min_latency_us": 2148.1244444444446, 00:26:06.304 "max_latency_us": 11165.392592592592 00:26:06.304 } 00:26:06.304 ], 00:26:06.304 "core_count": 1 00:26:06.304 } 00:26:06.304 18:33:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:06.304 18:33:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:06.304 18:33:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:06.304 18:33:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:06.304 | .driver_specific 00:26:06.304 | .nvme_error 00:26:06.304 | .status_code 00:26:06.304 | .command_transient_transport_error' 00:26:06.561 18:33:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 346 > 0 )) 00:26:06.561 18:33:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1354812 00:26:06.561 18:33:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1354812 ']' 00:26:06.561 18:33:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1354812 00:26:06.561 18:33:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:26:06.561 18:33:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:06.561 18:33:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1354812 00:26:06.819 18:33:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:06.819 18:33:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:06.819 18:33:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1354812' 00:26:06.819 killing process with pid 1354812 
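The digest_error pass above exercises NVMe/TCP data-digest failures: tcp.c reports a CRC32C mismatch in data_crc32_calc_done and each affected WRITE completes with COMMAND TRANSIENT TRANSPORT ERROR. The pass/fail decision is the get_transient_errcount call traced right above, which pulls bdev iostat over the bperf RPC socket and filters the nvme_error counters with jq. A minimal sketch of that pattern, with the socket path, bdev name, and jq filter taken from this run's trace:

```bash
# Sketch of the transient-error check traced above (host/digest.sh).
# /var/tmp/bperf.sock and nvme0n1 are the values used in this run.
get_transient_errcount() {
    local bdev=$1
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error'
}

# Pass criterion: at least one command finished with a transient transport
# error, i.e. the corrupted data digest was actually detected and reported.
(( $(get_transient_errcount nvme0n1) > 0 ))
```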
00:26:06.819 18:33:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1354812 00:26:06.819 Received shutdown signal, test time was about 2.000000 seconds 00:26:06.819 00:26:06.819 Latency(us) 00:26:06.819 [2024-10-08T16:33:25.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.819 [2024-10-08T16:33:25.106Z] =================================================================================================================== 00:26:06.819 [2024-10-08T16:33:25.106Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:06.819 18:33:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1354812 00:26:07.076 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1353317 00:26:07.076 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1353317 ']' 00:26:07.076 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1353317 00:26:07.076 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:26:07.076 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:07.077 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1353317 00:26:07.077 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:07.077 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:07.077 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1353317' 00:26:07.077 killing process with pid 1353317 00:26:07.077 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1353317 00:26:07.077 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1353317 00:26:07.335 00:26:07.335 real 0m15.714s 00:26:07.335 user 0m31.443s 00:26:07.335 sys 0m4.252s 00:26:07.335 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:07.335 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:07.335 ************************************ 00:26:07.335 END TEST nvmf_digest_error 00:26:07.335 ************************************ 00:26:07.335 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:07.335 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:07.335 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:07.335 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:26:07.335 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:07.335 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:26:07.335 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:07.335 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:07.335 rmmod nvme_tcp 00:26:07.335 rmmod nvme_fabrics 00:26:07.335 rmmod nvme_keyring 00:26:07.335 18:33:25 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:07.335 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:26:07.335 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:26:07.335 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 1353317 ']' 00:26:07.335 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 1353317 00:26:07.335 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 1353317 ']' 00:26:07.335 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 1353317 00:26:07.335 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1353317) - No such process 00:26:07.335 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 1353317 is not found' 00:26:07.335 Process with pid 1353317 is not found 00:26:07.335 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:07.335 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:07.335 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:07.335 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:26:07.335 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:26:07.335 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:07.335 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:26:07.335 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:07.335 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:07.335 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.335 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:07.335 18:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.234 18:33:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:09.493 00:26:09.493 real 0m36.488s 00:26:09.493 user 1m4.599s 00:26:09.493 sys 0m10.390s 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:09.493 ************************************ 00:26:09.493 END TEST nvmf_digest 00:26:09.493 ************************************ 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.493 
************************************ 00:26:09.493 START TEST nvmf_bdevperf 00:26:09.493 ************************************ 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:09.493 * Looking for test storage... 00:26:09.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:09.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.493 --rc genhtml_branch_coverage=1 00:26:09.493 --rc genhtml_function_coverage=1 00:26:09.493 --rc genhtml_legend=1 00:26:09.493 --rc geninfo_all_blocks=1 00:26:09.493 --rc geninfo_unexecuted_blocks=1 00:26:09.493 00:26:09.493 ' 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:09.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.493 --rc genhtml_branch_coverage=1 00:26:09.493 --rc genhtml_function_coverage=1 00:26:09.493 --rc genhtml_legend=1 00:26:09.493 --rc geninfo_all_blocks=1 00:26:09.493 --rc geninfo_unexecuted_blocks=1 00:26:09.493 00:26:09.493 ' 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:09.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.493 --rc genhtml_branch_coverage=1 00:26:09.493 --rc genhtml_function_coverage=1 00:26:09.493 --rc genhtml_legend=1 00:26:09.493 --rc geninfo_all_blocks=1 00:26:09.493 --rc geninfo_unexecuted_blocks=1 00:26:09.493 00:26:09.493 ' 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:09.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.493 --rc genhtml_branch_coverage=1 00:26:09.493 --rc genhtml_function_coverage=1 00:26:09.493 --rc genhtml_legend=1 00:26:09.493 --rc geninfo_all_blocks=1 00:26:09.493 --rc geninfo_unexecuted_blocks=1 00:26:09.493 00:26:09.493 ' 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:09.493 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.494 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:26:09.494 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:09.494 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:09.494 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:09.494 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:09.494 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:09.494 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:09.494 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:09.494 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:09.494 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:09.494 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:09.494 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:09.494 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:09.494 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:09.494 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:09.494 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:09.494 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:09.494 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:09.494 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:09.494 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.494 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:09.494 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.494 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:09.494 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:09.494 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:09.494 18:33:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:12.021 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:12.021 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:12.021 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:12.021 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:12.021 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:12.021 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:12.021 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:12.021 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:26:12.021 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:12.021 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:26:12.021 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:26:12.021 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:26:12.021 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:26:12.021 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:26:12.021 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:12.021 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:12.021 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:12.021 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:12.021 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:12.022 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:12.022 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
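The block above is nvmf/common.sh mapping known NIC PCI device IDs (Intel E810 0x1592/0x159b in this run) to kernel net interfaces through sysfs, which is how the harness arrives at the cvl_0_0 / cvl_0_1 interfaces reported just below. A rough stand-alone sketch of that discovery pattern; the only assumptions are the standard sysfs layout and the E810 IDs listed by the script:

```bash
# Sketch of the E810 discovery pattern walked through in the trace above.
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor") device=$(<"$pci/device")
    [[ $vendor == 0x8086 && ( $device == 0x1592 || $device == 0x159b ) ]] || continue
    echo "Found $(basename "$pci") ($vendor - $device)"
    for net in "$pci"/net/*; do
        # the net/ subdirectory names the interface, e.g. cvl_0_0
        [[ -e $net ]] && echo "  net device: $(basename "$net")"
    done
done
```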
00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:12.022 Found net devices under 0000:09:00.0: cvl_0_0 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:12.022 Found net devices under 0000:09:00.1: cvl_0_1 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:26:12.022 18:33:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:12.022 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:12.022 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:26:12.022 00:26:12.022 --- 10.0.0.2 ping statistics --- 00:26:12.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.022 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:12.022 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:12.022 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:26:12.022 00:26:12.022 --- 10.0.0.1 ping statistics --- 00:26:12.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.022 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=1357168 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 1357168 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1357168 ']' 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:12.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:12.022 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:12.023 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:12.023 [2024-10-08 18:33:30.218721] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:26:12.023 [2024-10-08 18:33:30.218805] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:12.023 [2024-10-08 18:33:30.287560] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:12.280 [2024-10-08 18:33:30.402493] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:12.280 [2024-10-08 18:33:30.402543] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:12.280 [2024-10-08 18:33:30.402558] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:12.280 [2024-10-08 18:33:30.402569] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:12.280 [2024-10-08 18:33:30.402593] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:12.280 [2024-10-08 18:33:30.403459] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:26:12.280 [2024-10-08 18:33:30.403532] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:26:12.280 [2024-10-08 18:33:30.403536] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:12.280 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:12.280 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:26:12.280 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:12.280 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:12.280 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:12.280 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:12.280 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:12.280 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.280 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:12.280 [2024-10-08 18:33:30.544820] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:12.538 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.538 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:12.538 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.538 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:12.538 Malloc0 00:26:12.538 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.538 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:12.538 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.538 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:12.538 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
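At this point the trace has started nvmf_tgt and begun wiring up the test target over rpc_cmd: create the TCP transport, create a 64 MiB malloc bdev with 512-byte blocks, and create subsystem nqn.2016-06.io.spdk:cnode1; the namespace and listener calls follow immediately below. Collected in one place, the same bring-up with rpc.py looks roughly like this; every value (transport options, bdev size, NQN, 10.0.0.2:4420) is the one host/bdevperf.sh uses in this run, and rpc.py is assumed to talk to the target's default /var/tmp/spdk.sock:

```bash
# Sketch of the target bring-up traced in host/bdevperf.sh@17-21.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
```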
00:26:12.538 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:12.538 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.538 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:12.538 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.538 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:12.538 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.538 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:12.538 [2024-10-08 18:33:30.612169] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:12.538 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.538 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:12.538 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:12.538 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:26:12.538 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:26:12.538 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:12.538 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:12.538 { 00:26:12.538 "params": { 00:26:12.538 "name": "Nvme$subsystem", 00:26:12.538 "trtype": "$TEST_TRANSPORT", 00:26:12.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:12.538 "adrfam": "ipv4", 00:26:12.538 "trsvcid": "$NVMF_PORT", 00:26:12.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:12.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:12.538 "hdgst": ${hdgst:-false}, 00:26:12.538 "ddgst": ${ddgst:-false} 00:26:12.538 }, 00:26:12.538 "method": "bdev_nvme_attach_controller" 00:26:12.538 } 00:26:12.538 EOF 00:26:12.538 )") 00:26:12.538 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:26:12.538 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:26:12.538 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:26:12.538 18:33:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:26:12.538 "params": { 00:26:12.538 "name": "Nvme1", 00:26:12.538 "trtype": "tcp", 00:26:12.538 "traddr": "10.0.0.2", 00:26:12.538 "adrfam": "ipv4", 00:26:12.538 "trsvcid": "4420", 00:26:12.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:12.538 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:12.538 "hdgst": false, 00:26:12.538 "ddgst": false 00:26:12.538 }, 00:26:12.538 "method": "bdev_nvme_attach_controller" 00:26:12.538 }' 00:26:12.538 [2024-10-08 18:33:30.660246] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:26:12.538 [2024-10-08 18:33:30.660359] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1357323 ] 00:26:12.538 [2024-10-08 18:33:30.720111] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.796 [2024-10-08 18:33:30.831821] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.796 Running I/O for 1 seconds... 00:26:14.167 8260.00 IOPS, 32.27 MiB/s 00:26:14.167 Latency(us) 00:26:14.167 [2024-10-08T16:33:32.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.167 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:14.167 Verification LBA range: start 0x0 length 0x4000 00:26:14.167 Nvme1n1 : 1.01 8278.08 32.34 0.00 0.00 15400.82 3301.07 15243.19 00:26:14.167 [2024-10-08T16:33:32.454Z] =================================================================================================================== 00:26:14.167 [2024-10-08T16:33:32.454Z] Total : 8278.08 32.34 0.00 0.00 15400.82 3301.07 15243.19 00:26:14.167 18:33:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1357466 00:26:14.167 18:33:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:14.167 18:33:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:14.167 18:33:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:14.167 18:33:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:26:14.167 18:33:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:26:14.167 18:33:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:26:14.167 18:33:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:26:14.167 { 00:26:14.167 "params": { 00:26:14.167 "name": "Nvme$subsystem", 00:26:14.167 "trtype": "$TEST_TRANSPORT", 00:26:14.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.167 "adrfam": "ipv4", 00:26:14.167 "trsvcid": "$NVMF_PORT", 00:26:14.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.167 "hdgst": ${hdgst:-false}, 00:26:14.167 "ddgst": ${ddgst:-false} 00:26:14.167 }, 00:26:14.167 "method": "bdev_nvme_attach_controller" 00:26:14.167 } 00:26:14.167 EOF 00:26:14.167 )") 00:26:14.167 18:33:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:26:14.167 18:33:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 
00:26:14.167 18:33:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:26:14.167 18:33:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:26:14.167 "params": { 00:26:14.167 "name": "Nvme1", 00:26:14.167 "trtype": "tcp", 00:26:14.167 "traddr": "10.0.0.2", 00:26:14.167 "adrfam": "ipv4", 00:26:14.167 "trsvcid": "4420", 00:26:14.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:14.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:14.167 "hdgst": false, 00:26:14.167 "ddgst": false 00:26:14.167 }, 00:26:14.167 "method": "bdev_nvme_attach_controller" 00:26:14.167 }' 00:26:14.167 [2024-10-08 18:33:32.386129] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:26:14.167 [2024-10-08 18:33:32.386204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1357466 ] 00:26:14.167 [2024-10-08 18:33:32.445142] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.425 [2024-10-08 18:33:32.555044] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.682 Running I/O for 15 seconds... 00:26:16.540 8278.00 IOPS, 32.34 MiB/s [2024-10-08T16:33:35.394Z] 8433.50 IOPS, 32.94 MiB/s [2024-10-08T16:33:35.394Z] 18:33:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1357168 00:26:17.107 18:33:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:17.107 [2024-10-08 18:33:35.356376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:41112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.107 [2024-10-08 18:33:35.356423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.107 [2024-10-08 18:33:35.356454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:41120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.107 [2024-10-08 18:33:35.356473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.107 [2024-10-08 18:33:35.356491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:41128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.107 [2024-10-08 18:33:35.356507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.107 [2024-10-08 18:33:35.356524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:41136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.107 [2024-10-08 18:33:35.356547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.107 [2024-10-08 18:33:35.356564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:41144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.107 [2024-10-08 18:33:35.356580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.107 [2024-10-08 18:33:35.356615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:41152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.107 [2024-10-08 
18:33:35.356629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.107 [2024-10-08 18:33:35.356645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.107 [2024-10-08 18:33:35.356660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.107 [2024-10-08 18:33:35.356691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:41168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.107 [2024-10-08 18:33:35.356705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.107 [2024-10-08 18:33:35.356721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:41176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.107 [2024-10-08 18:33:35.356735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.107 [2024-10-08 18:33:35.356751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:41184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.356764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.356779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:41192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.356792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.356807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:41200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.356821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.356836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:41208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.356857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.356872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:41216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.356884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.356899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:41224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.356912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.356926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:42064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.108 [2024-10-08 18:33:35.356942] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.356958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:41232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.356989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.357006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:41240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.357021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.357038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:41248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.357054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.357070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.357084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.357100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:41264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.357115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.357131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:41272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.357145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.357174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:41280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.357187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.357210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:41288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.357222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.357237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.357249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.357267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:41304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.357306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.357323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:41312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.357341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.357355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:41320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.357369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.357385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:41328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.357398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.357413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:41336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.357426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.357441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:41344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.357455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.357475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.357489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.357503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:41360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.357517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.357532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.357546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.357561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:41376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.357594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.357609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:41384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.357623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.357638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:41392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.357665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.357679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:41400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.357695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.357709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:41408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.357721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.357735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:41416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.357747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.357760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:41424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.357771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.357785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:41432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.357797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.357811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:41440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.357823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.357836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:41448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.357848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.357861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:41456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.357873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.357887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:41464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.357899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:17.108 [2024-10-08 18:33:35.357913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:41472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.357925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.357938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:41480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.357950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.357964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:41488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.357976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.108 [2024-10-08 18:33:35.357989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:41496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.108 [2024-10-08 18:33:35.358001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:41504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.358031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:41512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.358058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:41520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.358083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:41528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.358108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:41536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.358134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:41544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.358159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358173] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:41552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.358185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:41560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.358210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:41568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.358235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:41576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.358261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:41584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.358313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:41592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.358343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:41600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.358380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:41608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.358411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:41616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.358444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:41624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.358473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358488] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:41632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.358502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:41640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.358532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:41648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.358561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:41656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.358603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:41664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.358629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:41672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.358670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:41680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.358696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:41688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.358721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:41696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.358746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:41704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.358775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:84 nsid:1 lba:41712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.358802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:41720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.358828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:41728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.358853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:41736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.358879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:42072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.109 [2024-10-08 18:33:35.358905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:41744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.358930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:41752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.358963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.358977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:41760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.358990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.359003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:41768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.359015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.359028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:41776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.359041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.359054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:41784 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.359066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.359079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:41792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.359092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.359110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:41800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.359124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.359137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.359150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.109 [2024-10-08 18:33:35.359164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:41816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.109 [2024-10-08 18:33:35.359177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.359191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:41824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.110 [2024-10-08 18:33:35.359203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.359217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:41832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.110 [2024-10-08 18:33:35.359230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.359244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:41840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.110 [2024-10-08 18:33:35.359256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.359285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:41848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.110 [2024-10-08 18:33:35.359308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.359325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.110 [2024-10-08 18:33:35.359339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.359363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:41864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:17.110 [2024-10-08 18:33:35.359377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.359393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:41872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.110 [2024-10-08 18:33:35.359408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.359423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:41880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.110 [2024-10-08 18:33:35.359449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.359466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.110 [2024-10-08 18:33:35.359481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.359496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:41896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.110 [2024-10-08 18:33:35.359514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.359531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:41904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.110 [2024-10-08 18:33:35.359545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.359561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:41912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.110 [2024-10-08 18:33:35.359590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.359605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:41920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.110 [2024-10-08 18:33:35.359618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.359632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:41928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.110 [2024-10-08 18:33:35.359661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.359675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.110 [2024-10-08 18:33:35.359688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.359702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:42088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.110 [2024-10-08 18:33:35.359715] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.359729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:42096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.110 [2024-10-08 18:33:35.359742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.359755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:42104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.110 [2024-10-08 18:33:35.359768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.359782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:42112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.110 [2024-10-08 18:33:35.359795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.359810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:42120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.110 [2024-10-08 18:33:35.359823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.359836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:42128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.110 [2024-10-08 18:33:35.359849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.359863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.110 [2024-10-08 18:33:35.359876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.359896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:41944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.110 [2024-10-08 18:33:35.359909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.359923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:41952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.110 [2024-10-08 18:33:35.359937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.359950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:41960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.110 [2024-10-08 18:33:35.359963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.359976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:41968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.110 [2024-10-08 18:33:35.359989] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.360002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:41976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.110 [2024-10-08 18:33:35.360014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.360028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:41984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.110 [2024-10-08 18:33:35.360040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.360054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:41992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.110 [2024-10-08 18:33:35.360066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.360080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:42000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.110 [2024-10-08 18:33:35.360092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.360106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:42008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.110 [2024-10-08 18:33:35.360118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.360132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:42016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.110 [2024-10-08 18:33:35.360144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.360158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:42024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.110 [2024-10-08 18:33:35.360169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.360182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.110 [2024-10-08 18:33:35.360194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.360207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:42040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.110 [2024-10-08 18:33:35.360219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.360235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:42048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.110 [2024-10-08 18:33:35.360248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.360260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c202c0 is same with the state(6) to be set 00:26:17.110 [2024-10-08 18:33:35.360296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:17.110 [2024-10-08 18:33:35.360309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:17.110 [2024-10-08 18:33:35.360321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42056 len:8 PRP1 0x0 PRP2 0x0 00:26:17.110 [2024-10-08 18:33:35.360334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.110 [2024-10-08 18:33:35.360407] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c202c0 was disconnected and freed. reset controller. 00:26:17.110 [2024-10-08 18:33:35.363803] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.110 [2024-10-08 18:33:35.363880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.110 [2024-10-08 18:33:35.364527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.110 [2024-10-08 18:33:35.364557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.110 [2024-10-08 18:33:35.364573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.110 [2024-10-08 18:33:35.364818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.111 [2024-10-08 18:33:35.365012] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.111 [2024-10-08 18:33:35.365031] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.111 [2024-10-08 18:33:35.365045] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.111 [2024-10-08 18:33:35.368078] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.111 [2024-10-08 18:33:35.377362] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.111 [2024-10-08 18:33:35.377754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.111 [2024-10-08 18:33:35.377782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.111 [2024-10-08 18:33:35.377798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.111 [2024-10-08 18:33:35.378024] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.111 [2024-10-08 18:33:35.378235] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.111 [2024-10-08 18:33:35.378254] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.111 [2024-10-08 18:33:35.378265] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.111 [2024-10-08 18:33:35.381313] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.111 [2024-10-08 18:33:35.390831] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.111 [2024-10-08 18:33:35.391186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.111 [2024-10-08 18:33:35.391238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.111 [2024-10-08 18:33:35.391254] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.111 [2024-10-08 18:33:35.391517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.111 [2024-10-08 18:33:35.391729] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.111 [2024-10-08 18:33:35.391748] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.111 [2024-10-08 18:33:35.391759] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.369 [2024-10-08 18:33:35.394980] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.369 [2024-10-08 18:33:35.404058] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.369 [2024-10-08 18:33:35.404477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.369 [2024-10-08 18:33:35.404505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.369 [2024-10-08 18:33:35.404521] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.369 [2024-10-08 18:33:35.404767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.369 [2024-10-08 18:33:35.404976] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.369 [2024-10-08 18:33:35.404994] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.369 [2024-10-08 18:33:35.405006] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.369 [2024-10-08 18:33:35.408001] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.369 [2024-10-08 18:33:35.417045] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.369 [2024-10-08 18:33:35.417434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.369 [2024-10-08 18:33:35.417477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.369 [2024-10-08 18:33:35.417493] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.369 [2024-10-08 18:33:35.417747] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.369 [2024-10-08 18:33:35.417940] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.369 [2024-10-08 18:33:35.417958] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.369 [2024-10-08 18:33:35.417970] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.369 [2024-10-08 18:33:35.420796] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.369 [2024-10-08 18:33:35.430175] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.369 [2024-10-08 18:33:35.430566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.369 [2024-10-08 18:33:35.430595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.369 [2024-10-08 18:33:35.430611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.369 [2024-10-08 18:33:35.430852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.369 [2024-10-08 18:33:35.431066] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.369 [2024-10-08 18:33:35.431085] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.369 [2024-10-08 18:33:35.431096] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.369 [2024-10-08 18:33:35.434024] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.369 [2024-10-08 18:33:35.443294] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.370 [2024-10-08 18:33:35.443667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.370 [2024-10-08 18:33:35.443694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.370 [2024-10-08 18:33:35.443710] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.370 [2024-10-08 18:33:35.443930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.370 [2024-10-08 18:33:35.444138] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.370 [2024-10-08 18:33:35.444157] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.370 [2024-10-08 18:33:35.444168] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.370 [2024-10-08 18:33:35.447111] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.370 [2024-10-08 18:33:35.456404] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.370 [2024-10-08 18:33:35.456774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.370 [2024-10-08 18:33:35.456815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.370 [2024-10-08 18:33:35.456830] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.370 [2024-10-08 18:33:35.457078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.370 [2024-10-08 18:33:35.457310] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.370 [2024-10-08 18:33:35.457330] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.370 [2024-10-08 18:33:35.457342] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.370 [2024-10-08 18:33:35.460242] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.370 [2024-10-08 18:33:35.469538] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.370 [2024-10-08 18:33:35.470032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.370 [2024-10-08 18:33:35.470074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.370 [2024-10-08 18:33:35.470091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.370 [2024-10-08 18:33:35.470368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.370 [2024-10-08 18:33:35.470573] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.370 [2024-10-08 18:33:35.470607] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.370 [2024-10-08 18:33:35.470620] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.370 [2024-10-08 18:33:35.473561] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.370 [2024-10-08 18:33:35.482585] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.370 [2024-10-08 18:33:35.482966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.370 [2024-10-08 18:33:35.482995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.370 [2024-10-08 18:33:35.483011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.370 [2024-10-08 18:33:35.483252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.370 [2024-10-08 18:33:35.483492] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.370 [2024-10-08 18:33:35.483513] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.370 [2024-10-08 18:33:35.483525] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.370 [2024-10-08 18:33:35.486428] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.370 [2024-10-08 18:33:35.495656] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.370 [2024-10-08 18:33:35.496022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.370 [2024-10-08 18:33:35.496064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.370 [2024-10-08 18:33:35.496079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.370 [2024-10-08 18:33:35.496336] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.370 [2024-10-08 18:33:35.496557] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.370 [2024-10-08 18:33:35.496576] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.370 [2024-10-08 18:33:35.496589] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.370 [2024-10-08 18:33:35.499486] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.370 [2024-10-08 18:33:35.508865] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.370 [2024-10-08 18:33:35.509282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.370 [2024-10-08 18:33:35.509330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.370 [2024-10-08 18:33:35.509345] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.370 [2024-10-08 18:33:35.509601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.370 [2024-10-08 18:33:35.509809] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.370 [2024-10-08 18:33:35.509828] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.370 [2024-10-08 18:33:35.509839] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.370 [2024-10-08 18:33:35.512907] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.370 [2024-10-08 18:33:35.521995] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.370 [2024-10-08 18:33:35.522369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.370 [2024-10-08 18:33:35.522412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.370 [2024-10-08 18:33:35.522433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.370 [2024-10-08 18:33:35.522704] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.370 [2024-10-08 18:33:35.522897] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.370 [2024-10-08 18:33:35.522915] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.370 [2024-10-08 18:33:35.522927] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.370 [2024-10-08 18:33:35.525910] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.370 [2024-10-08 18:33:35.535102] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.370 [2024-10-08 18:33:35.535494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.370 [2024-10-08 18:33:35.535521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.370 [2024-10-08 18:33:35.535536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.370 [2024-10-08 18:33:35.535752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.370 [2024-10-08 18:33:35.535962] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.370 [2024-10-08 18:33:35.535980] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.370 [2024-10-08 18:33:35.535992] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.370 [2024-10-08 18:33:35.538817] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.370 [2024-10-08 18:33:35.548199] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.370 [2024-10-08 18:33:35.548608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.370 [2024-10-08 18:33:35.548635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.370 [2024-10-08 18:33:35.548651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.370 [2024-10-08 18:33:35.548874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.370 [2024-10-08 18:33:35.549083] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.370 [2024-10-08 18:33:35.549102] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.370 [2024-10-08 18:33:35.549113] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.370 [2024-10-08 18:33:35.552037] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.370 [2024-10-08 18:33:35.561285] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.370 [2024-10-08 18:33:35.561659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.370 [2024-10-08 18:33:35.561702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.370 [2024-10-08 18:33:35.561718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.370 [2024-10-08 18:33:35.561970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.370 [2024-10-08 18:33:35.562178] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.370 [2024-10-08 18:33:35.562201] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.370 [2024-10-08 18:33:35.562214] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.370 [2024-10-08 18:33:35.565178] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.370 [2024-10-08 18:33:35.574349] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.370 [2024-10-08 18:33:35.574838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.370 [2024-10-08 18:33:35.574878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.370 [2024-10-08 18:33:35.574894] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.370 [2024-10-08 18:33:35.575144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.370 [2024-10-08 18:33:35.575377] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.371 [2024-10-08 18:33:35.575399] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.371 [2024-10-08 18:33:35.575412] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.371 [2024-10-08 18:33:35.578317] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.371 [2024-10-08 18:33:35.587443] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.371 [2024-10-08 18:33:35.587745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.371 [2024-10-08 18:33:35.587785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.371 [2024-10-08 18:33:35.587801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.371 [2024-10-08 18:33:35.588001] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.371 [2024-10-08 18:33:35.588225] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.371 [2024-10-08 18:33:35.588244] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.371 [2024-10-08 18:33:35.588256] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.371 [2024-10-08 18:33:35.591203] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.371 [2024-10-08 18:33:35.600500] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.371 [2024-10-08 18:33:35.600865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.371 [2024-10-08 18:33:35.600908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.371 [2024-10-08 18:33:35.600923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.371 [2024-10-08 18:33:35.601172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.371 [2024-10-08 18:33:35.601427] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.371 [2024-10-08 18:33:35.601448] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.371 [2024-10-08 18:33:35.601461] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.371 [2024-10-08 18:33:35.604422] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.371 [2024-10-08 18:33:35.613526] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.371 [2024-10-08 18:33:35.613907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.371 [2024-10-08 18:33:35.613934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.371 [2024-10-08 18:33:35.613949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.371 [2024-10-08 18:33:35.614168] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.371 [2024-10-08 18:33:35.614426] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.371 [2024-10-08 18:33:35.614447] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.371 [2024-10-08 18:33:35.614460] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.371 [2024-10-08 18:33:35.617463] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.371 [2024-10-08 18:33:35.626816] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.371 [2024-10-08 18:33:35.627173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.371 [2024-10-08 18:33:35.627201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.371 [2024-10-08 18:33:35.627218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.371 [2024-10-08 18:33:35.627569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.371 [2024-10-08 18:33:35.627807] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.371 [2024-10-08 18:33:35.627826] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.371 [2024-10-08 18:33:35.627840] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.371 [2024-10-08 18:33:35.630994] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.371 [2024-10-08 18:33:35.640214] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.371 [2024-10-08 18:33:35.640734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.371 [2024-10-08 18:33:35.640777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.371 [2024-10-08 18:33:35.640793] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.371 [2024-10-08 18:33:35.641044] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.371 [2024-10-08 18:33:35.641253] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.371 [2024-10-08 18:33:35.641271] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.371 [2024-10-08 18:33:35.641283] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.371 [2024-10-08 18:33:35.644431] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.371 [2024-10-08 18:33:35.653700] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.371 [2024-10-08 18:33:35.654083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.371 [2024-10-08 18:33:35.654152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.371 [2024-10-08 18:33:35.654169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.371 [2024-10-08 18:33:35.654418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.371 [2024-10-08 18:33:35.654644] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.371 [2024-10-08 18:33:35.654664] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.371 [2024-10-08 18:33:35.654676] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.629 [2024-10-08 18:33:35.657976] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.629 [2024-10-08 18:33:35.666882] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.629 [2024-10-08 18:33:35.667377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-10-08 18:33:35.667419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.629 [2024-10-08 18:33:35.667435] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.629 [2024-10-08 18:33:35.667678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.629 [2024-10-08 18:33:35.667871] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.629 [2024-10-08 18:33:35.667889] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.629 [2024-10-08 18:33:35.667900] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.629 [2024-10-08 18:33:35.670835] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.629 [2024-10-08 18:33:35.680005] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.629 [2024-10-08 18:33:35.680393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.629 [2024-10-08 18:33:35.680436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.629 [2024-10-08 18:33:35.680453] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.629 [2024-10-08 18:33:35.680729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.629 [2024-10-08 18:33:35.680923] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.629 [2024-10-08 18:33:35.680941] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.629 [2024-10-08 18:33:35.680953] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.629 [2024-10-08 18:33:35.683817] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.629 [2024-10-08 18:33:35.693230] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.629 [2024-10-08 18:33:35.693684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-10-08 18:33:35.693726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.630 [2024-10-08 18:33:35.693742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.630 [2024-10-08 18:33:35.693976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.630 [2024-10-08 18:33:35.694169] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.630 [2024-10-08 18:33:35.694187] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.630 [2024-10-08 18:33:35.694204] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.630 [2024-10-08 18:33:35.697148] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.630 [2024-10-08 18:33:35.706352] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.630 [2024-10-08 18:33:35.706683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-10-08 18:33:35.706710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.630 [2024-10-08 18:33:35.706725] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.630 [2024-10-08 18:33:35.706948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.630 [2024-10-08 18:33:35.707172] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.630 [2024-10-08 18:33:35.707191] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.630 [2024-10-08 18:33:35.707204] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.630 [2024-10-08 18:33:35.710148] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.630 [2024-10-08 18:33:35.719445] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.630 [2024-10-08 18:33:35.719799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-10-08 18:33:35.719827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.630 [2024-10-08 18:33:35.719842] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.630 [2024-10-08 18:33:35.720065] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.630 [2024-10-08 18:33:35.720274] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.630 [2024-10-08 18:33:35.720317] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.630 [2024-10-08 18:33:35.720331] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.630 [2024-10-08 18:33:35.723233] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.630 [2024-10-08 18:33:35.732598] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.630 [2024-10-08 18:33:35.733096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-10-08 18:33:35.733136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.630 [2024-10-08 18:33:35.733153] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.630 [2024-10-08 18:33:35.733427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.630 [2024-10-08 18:33:35.733633] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.630 [2024-10-08 18:33:35.733652] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.630 [2024-10-08 18:33:35.733665] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.630 [2024-10-08 18:33:35.736568] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.630 [2024-10-08 18:33:35.745718] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.630 [2024-10-08 18:33:35.746210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-10-08 18:33:35.746251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.630 [2024-10-08 18:33:35.746267] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.630 [2024-10-08 18:33:35.746531] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.630 [2024-10-08 18:33:35.746757] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.630 [2024-10-08 18:33:35.746776] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.630 [2024-10-08 18:33:35.746788] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.630 [2024-10-08 18:33:35.749690] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.630 [2024-10-08 18:33:35.758877] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.630 [2024-10-08 18:33:35.759241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-10-08 18:33:35.759284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.630 [2024-10-08 18:33:35.759313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.630 [2024-10-08 18:33:35.759556] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.630 [2024-10-08 18:33:35.759782] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.630 [2024-10-08 18:33:35.759801] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.630 [2024-10-08 18:33:35.759813] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.630 [2024-10-08 18:33:35.762712] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.630 [2024-10-08 18:33:35.771985] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.630 [2024-10-08 18:33:35.772273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-10-08 18:33:35.772305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.630 [2024-10-08 18:33:35.772321] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.630 [2024-10-08 18:33:35.772515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.630 [2024-10-08 18:33:35.772722] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.630 [2024-10-08 18:33:35.772741] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.630 [2024-10-08 18:33:35.772753] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.630 [2024-10-08 18:33:35.775576] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.630 [2024-10-08 18:33:35.785009] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.630 [2024-10-08 18:33:35.785370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-10-08 18:33:35.785397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.630 [2024-10-08 18:33:35.785412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.630 [2024-10-08 18:33:35.785626] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.630 [2024-10-08 18:33:35.785839] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.630 [2024-10-08 18:33:35.785857] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.630 [2024-10-08 18:33:35.785869] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.630 [2024-10-08 18:33:35.788705] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.630 7174.33 IOPS, 28.02 MiB/s [2024-10-08T16:33:35.917Z] [2024-10-08 18:33:35.798135] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.630 [2024-10-08 18:33:35.798503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-10-08 18:33:35.798545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.630 [2024-10-08 18:33:35.798561] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.630 [2024-10-08 18:33:35.798808] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.630 [2024-10-08 18:33:35.799001] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.630 [2024-10-08 18:33:35.799019] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.630 [2024-10-08 18:33:35.799031] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.630 [2024-10-08 18:33:35.801892] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.630 [2024-10-08 18:33:35.811153] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.630 [2024-10-08 18:33:35.811524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-10-08 18:33:35.811566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.630 [2024-10-08 18:33:35.811582] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.630 [2024-10-08 18:33:35.811837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.630 [2024-10-08 18:33:35.812045] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.630 [2024-10-08 18:33:35.812064] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.630 [2024-10-08 18:33:35.812075] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.630 [2024-10-08 18:33:35.815092] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.630 [2024-10-08 18:33:35.824394] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.630 [2024-10-08 18:33:35.824775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.630 [2024-10-08 18:33:35.824801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.630 [2024-10-08 18:33:35.824816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.630 [2024-10-08 18:33:35.825051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.630 [2024-10-08 18:33:35.825260] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.630 [2024-10-08 18:33:35.825303] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.631 [2024-10-08 18:33:35.825321] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.631 [2024-10-08 18:33:35.828193] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.631 [2024-10-08 18:33:35.837616] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.631 [2024-10-08 18:33:35.838028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-10-08 18:33:35.838069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.631 [2024-10-08 18:33:35.838084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.631 [2024-10-08 18:33:35.838315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.631 [2024-10-08 18:33:35.838520] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.631 [2024-10-08 18:33:35.838540] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.631 [2024-10-08 18:33:35.838553] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.631 [2024-10-08 18:33:35.841475] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.631 [2024-10-08 18:33:35.850729] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.631 [2024-10-08 18:33:35.851059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-10-08 18:33:35.851086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.631 [2024-10-08 18:33:35.851102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.631 [2024-10-08 18:33:35.851332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.631 [2024-10-08 18:33:35.851547] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.631 [2024-10-08 18:33:35.851566] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.631 [2024-10-08 18:33:35.851578] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.631 [2024-10-08 18:33:35.854377] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.631 [2024-10-08 18:33:35.863748] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.631 [2024-10-08 18:33:35.864175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-10-08 18:33:35.864218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.631 [2024-10-08 18:33:35.864233] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.631 [2024-10-08 18:33:35.864501] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.631 [2024-10-08 18:33:35.864733] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.631 [2024-10-08 18:33:35.864754] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.631 [2024-10-08 18:33:35.864765] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.631 [2024-10-08 18:33:35.867742] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.631 [2024-10-08 18:33:35.877194] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.631 [2024-10-08 18:33:35.877621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-10-08 18:33:35.877669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.631 [2024-10-08 18:33:35.877685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.631 [2024-10-08 18:33:35.877925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.631 [2024-10-08 18:33:35.878135] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.631 [2024-10-08 18:33:35.878155] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.631 [2024-10-08 18:33:35.878167] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.631 [2024-10-08 18:33:35.881259] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.631 [2024-10-08 18:33:35.890462] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.631 [2024-10-08 18:33:35.890869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-10-08 18:33:35.890895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.631 [2024-10-08 18:33:35.890909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.631 [2024-10-08 18:33:35.891125] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.631 [2024-10-08 18:33:35.891361] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.631 [2024-10-08 18:33:35.891381] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.631 [2024-10-08 18:33:35.891394] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.631 [2024-10-08 18:33:35.894302] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.631 [2024-10-08 18:33:35.903735] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.631 [2024-10-08 18:33:35.904066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.631 [2024-10-08 18:33:35.904094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.631 [2024-10-08 18:33:35.904109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.631 [2024-10-08 18:33:35.904368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.631 [2024-10-08 18:33:35.904583] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.631 [2024-10-08 18:33:35.904602] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.631 [2024-10-08 18:33:35.904615] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.631 [2024-10-08 18:33:35.907688] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.889 [2024-10-08 18:33:35.917269] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.889 [2024-10-08 18:33:35.917663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.889 [2024-10-08 18:33:35.917706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.889 [2024-10-08 18:33:35.917722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.889 [2024-10-08 18:33:35.917951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.889 [2024-10-08 18:33:35.918152] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.889 [2024-10-08 18:33:35.918170] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.889 [2024-10-08 18:33:35.918182] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.889 [2024-10-08 18:33:35.921355] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.889 [2024-10-08 18:33:35.930941] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.889 [2024-10-08 18:33:35.931404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.889 [2024-10-08 18:33:35.931433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.889 [2024-10-08 18:33:35.931464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.889 [2024-10-08 18:33:35.931717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.889 [2024-10-08 18:33:35.931911] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.889 [2024-10-08 18:33:35.931929] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.889 [2024-10-08 18:33:35.931941] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.890 [2024-10-08 18:33:35.934878] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.890 [2024-10-08 18:33:35.944059] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.890 [2024-10-08 18:33:35.944382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.890 [2024-10-08 18:33:35.944410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.890 [2024-10-08 18:33:35.944425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.890 [2024-10-08 18:33:35.944632] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.890 [2024-10-08 18:33:35.944863] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.890 [2024-10-08 18:33:35.944881] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.890 [2024-10-08 18:33:35.944893] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.890 [2024-10-08 18:33:35.947798] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.890 [2024-10-08 18:33:35.957152] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.890 [2024-10-08 18:33:35.957596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.890 [2024-10-08 18:33:35.957651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.890 [2024-10-08 18:33:35.957666] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.890 [2024-10-08 18:33:35.957894] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.890 [2024-10-08 18:33:35.958088] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.890 [2024-10-08 18:33:35.958107] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.890 [2024-10-08 18:33:35.958118] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.890 [2024-10-08 18:33:35.960948] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.890 [2024-10-08 18:33:35.970302] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.890 [2024-10-08 18:33:35.970643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.890 [2024-10-08 18:33:35.970707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.890 [2024-10-08 18:33:35.970722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.890 [2024-10-08 18:33:35.970951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.890 [2024-10-08 18:33:35.971145] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.890 [2024-10-08 18:33:35.971163] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.890 [2024-10-08 18:33:35.971175] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.890 [2024-10-08 18:33:35.974152] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.890 [2024-10-08 18:33:35.983465] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.890 [2024-10-08 18:33:35.983896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.890 [2024-10-08 18:33:35.983948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.890 [2024-10-08 18:33:35.983964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.890 [2024-10-08 18:33:35.984205] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.890 [2024-10-08 18:33:35.984432] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.890 [2024-10-08 18:33:35.984453] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.890 [2024-10-08 18:33:35.984466] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.890 [2024-10-08 18:33:35.987382] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.890 [2024-10-08 18:33:35.996763] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.890 [2024-10-08 18:33:35.997097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.890 [2024-10-08 18:33:35.997124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.890 [2024-10-08 18:33:35.997139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.890 [2024-10-08 18:33:35.997372] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.890 [2024-10-08 18:33:35.997588] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.890 [2024-10-08 18:33:35.997607] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.890 [2024-10-08 18:33:35.997619] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.890 [2024-10-08 18:33:36.000874] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.890 [2024-10-08 18:33:36.010454] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.890 [2024-10-08 18:33:36.010832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.890 [2024-10-08 18:33:36.010869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.890 [2024-10-08 18:33:36.010907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.890 [2024-10-08 18:33:36.011149] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.890 [2024-10-08 18:33:36.011407] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.890 [2024-10-08 18:33:36.011430] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.890 [2024-10-08 18:33:36.011444] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.890 [2024-10-08 18:33:36.014486] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.890 [2024-10-08 18:33:36.023843] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.890 [2024-10-08 18:33:36.024206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.890 [2024-10-08 18:33:36.024248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.890 [2024-10-08 18:33:36.024265] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.890 [2024-10-08 18:33:36.024505] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.890 [2024-10-08 18:33:36.024737] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.890 [2024-10-08 18:33:36.024756] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.890 [2024-10-08 18:33:36.024768] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.890 [2024-10-08 18:33:36.027814] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.890 [2024-10-08 18:33:36.037199] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.890 [2024-10-08 18:33:36.037678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.890 [2024-10-08 18:33:36.037726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.890 [2024-10-08 18:33:36.037741] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.890 [2024-10-08 18:33:36.038002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.890 [2024-10-08 18:33:36.038195] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.890 [2024-10-08 18:33:36.038213] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.890 [2024-10-08 18:33:36.038225] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.890 [2024-10-08 18:33:36.041204] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.890 [2024-10-08 18:33:36.050532] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.890 [2024-10-08 18:33:36.050940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.890 [2024-10-08 18:33:36.050977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.890 [2024-10-08 18:33:36.051010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.890 [2024-10-08 18:33:36.051272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.890 [2024-10-08 18:33:36.051503] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.890 [2024-10-08 18:33:36.051529] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.890 [2024-10-08 18:33:36.051542] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.890 [2024-10-08 18:33:36.054529] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.890 [2024-10-08 18:33:36.063812] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.890 [2024-10-08 18:33:36.064155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.890 [2024-10-08 18:33:36.064183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.890 [2024-10-08 18:33:36.064200] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.890 [2024-10-08 18:33:36.064451] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.890 [2024-10-08 18:33:36.064671] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.890 [2024-10-08 18:33:36.064690] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.890 [2024-10-08 18:33:36.064703] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.891 [2024-10-08 18:33:36.067832] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.891 [2024-10-08 18:33:36.077003] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.891 [2024-10-08 18:33:36.077373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.891 [2024-10-08 18:33:36.077415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.891 [2024-10-08 18:33:36.077431] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.891 [2024-10-08 18:33:36.077679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.891 [2024-10-08 18:33:36.077878] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.891 [2024-10-08 18:33:36.077897] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.891 [2024-10-08 18:33:36.077909] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.891 [2024-10-08 18:33:36.080900] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.891 [2024-10-08 18:33:36.090337] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.891 [2024-10-08 18:33:36.090772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.891 [2024-10-08 18:33:36.090800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.891 [2024-10-08 18:33:36.090816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.891 [2024-10-08 18:33:36.091057] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.891 [2024-10-08 18:33:36.091256] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.891 [2024-10-08 18:33:36.091299] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.891 [2024-10-08 18:33:36.091314] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.891 [2024-10-08 18:33:36.094352] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.891 [2024-10-08 18:33:36.103588] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.891 [2024-10-08 18:33:36.104036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.891 [2024-10-08 18:33:36.104079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.891 [2024-10-08 18:33:36.104095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.891 [2024-10-08 18:33:36.104360] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.891 [2024-10-08 18:33:36.104566] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.891 [2024-10-08 18:33:36.104585] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.891 [2024-10-08 18:33:36.104612] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.891 [2024-10-08 18:33:36.107708] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.891 [2024-10-08 18:33:36.116949] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.891 [2024-10-08 18:33:36.117308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.891 [2024-10-08 18:33:36.117337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.891 [2024-10-08 18:33:36.117354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.891 [2024-10-08 18:33:36.117597] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.891 [2024-10-08 18:33:36.117813] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.891 [2024-10-08 18:33:36.117833] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.891 [2024-10-08 18:33:36.117846] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.891 [2024-10-08 18:33:36.121065] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.891 [2024-10-08 18:33:36.130513] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.891 [2024-10-08 18:33:36.130864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.891 [2024-10-08 18:33:36.130891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.891 [2024-10-08 18:33:36.130907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.891 [2024-10-08 18:33:36.131129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.891 [2024-10-08 18:33:36.131370] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.891 [2024-10-08 18:33:36.131391] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.891 [2024-10-08 18:33:36.131404] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.891 [2024-10-08 18:33:36.134389] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.891 [2024-10-08 18:33:36.143846] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.891 [2024-10-08 18:33:36.144353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.891 [2024-10-08 18:33:36.144381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.891 [2024-10-08 18:33:36.144413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.891 [2024-10-08 18:33:36.144671] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.891 [2024-10-08 18:33:36.144870] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.891 [2024-10-08 18:33:36.144889] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.891 [2024-10-08 18:33:36.144902] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.891 [2024-10-08 18:33:36.147885] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:17.891 [2024-10-08 18:33:36.157113] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.891 [2024-10-08 18:33:36.157510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.891 [2024-10-08 18:33:36.157538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.891 [2024-10-08 18:33:36.157554] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.891 [2024-10-08 18:33:36.157795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.891 [2024-10-08 18:33:36.158013] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.891 [2024-10-08 18:33:36.158032] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.891 [2024-10-08 18:33:36.158045] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.891 [2024-10-08 18:33:36.161048] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.891 [2024-10-08 18:33:36.170344] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:17.891 [2024-10-08 18:33:36.170768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.891 [2024-10-08 18:33:36.170796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:17.891 [2024-10-08 18:33:36.170812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:17.891 [2024-10-08 18:33:36.171054] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:17.891 [2024-10-08 18:33:36.171283] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:17.891 [2024-10-08 18:33:36.171312] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:17.891 [2024-10-08 18:33:36.171325] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:17.891 [2024-10-08 18:33:36.174536] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.150 [2024-10-08 18:33:36.183736] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.150 [2024-10-08 18:33:36.184108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.150 [2024-10-08 18:33:36.184135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.150 [2024-10-08 18:33:36.184151] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.150 [2024-10-08 18:33:36.184396] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.150 [2024-10-08 18:33:36.184625] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.150 [2024-10-08 18:33:36.184645] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.150 [2024-10-08 18:33:36.184663] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.150 [2024-10-08 18:33:36.187712] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.150 [2024-10-08 18:33:36.197043] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.150 [2024-10-08 18:33:36.197434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.150 [2024-10-08 18:33:36.197463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.150 [2024-10-08 18:33:36.197479] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.150 [2024-10-08 18:33:36.197720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.150 [2024-10-08 18:33:36.197919] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.150 [2024-10-08 18:33:36.197938] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.150 [2024-10-08 18:33:36.197950] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.150 [2024-10-08 18:33:36.200932] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.150 [2024-10-08 18:33:36.210390] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.150 [2024-10-08 18:33:36.210786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.150 [2024-10-08 18:33:36.210829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.150 [2024-10-08 18:33:36.210845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.150 [2024-10-08 18:33:36.211114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.150 [2024-10-08 18:33:36.211343] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.150 [2024-10-08 18:33:36.211365] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.150 [2024-10-08 18:33:36.211378] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.150 [2024-10-08 18:33:36.214382] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.150 [2024-10-08 18:33:36.223595] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.150 [2024-10-08 18:33:36.223966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.150 [2024-10-08 18:33:36.224009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.150 [2024-10-08 18:33:36.224025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.150 [2024-10-08 18:33:36.224281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.150 [2024-10-08 18:33:36.224518] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.150 [2024-10-08 18:33:36.224538] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.150 [2024-10-08 18:33:36.224552] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.150 [2024-10-08 18:33:36.227535] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.150 [2024-10-08 18:33:36.236783] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.150 [2024-10-08 18:33:36.237193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.151 [2024-10-08 18:33:36.237221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.151 [2024-10-08 18:33:36.237237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.151 [2024-10-08 18:33:36.237476] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.151 [2024-10-08 18:33:36.237711] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.151 [2024-10-08 18:33:36.237730] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.151 [2024-10-08 18:33:36.237742] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.151 [2024-10-08 18:33:36.240729] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.151 [2024-10-08 18:33:36.250001] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.151 [2024-10-08 18:33:36.250414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.151 [2024-10-08 18:33:36.250443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.151 [2024-10-08 18:33:36.250459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.151 [2024-10-08 18:33:36.250701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.151 [2024-10-08 18:33:36.250900] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.151 [2024-10-08 18:33:36.250919] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.151 [2024-10-08 18:33:36.250931] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.151 [2024-10-08 18:33:36.253941] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.151 [2024-10-08 18:33:36.263261] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.151 [2024-10-08 18:33:36.263653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.151 [2024-10-08 18:33:36.263682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.151 [2024-10-08 18:33:36.263698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.151 [2024-10-08 18:33:36.263926] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.151 [2024-10-08 18:33:36.264142] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.151 [2024-10-08 18:33:36.264162] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.151 [2024-10-08 18:33:36.264175] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.151 [2024-10-08 18:33:36.267197] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.151 [2024-10-08 18:33:36.276557] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.151 [2024-10-08 18:33:36.277015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.151 [2024-10-08 18:33:36.277058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.151 [2024-10-08 18:33:36.277074] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.151 [2024-10-08 18:33:36.277339] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.151 [2024-10-08 18:33:36.277549] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.151 [2024-10-08 18:33:36.277569] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.151 [2024-10-08 18:33:36.277596] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.151 [2024-10-08 18:33:36.280562] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.151 [2024-10-08 18:33:36.289873] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.151 [2024-10-08 18:33:36.290374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.151 [2024-10-08 18:33:36.290417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.151 [2024-10-08 18:33:36.290434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.151 [2024-10-08 18:33:36.290684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.151 [2024-10-08 18:33:36.290902] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.151 [2024-10-08 18:33:36.290922] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.151 [2024-10-08 18:33:36.290934] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.151 [2024-10-08 18:33:36.293969] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.151 [2024-10-08 18:33:36.303067] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.151 [2024-10-08 18:33:36.303431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.151 [2024-10-08 18:33:36.303475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.151 [2024-10-08 18:33:36.303491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.151 [2024-10-08 18:33:36.303759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.151 [2024-10-08 18:33:36.303958] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.151 [2024-10-08 18:33:36.303977] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.151 [2024-10-08 18:33:36.303989] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.151 [2024-10-08 18:33:36.307008] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.151 [2024-10-08 18:33:36.316336] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.151 [2024-10-08 18:33:36.316714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.151 [2024-10-08 18:33:36.316742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.151 [2024-10-08 18:33:36.316757] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.151 [2024-10-08 18:33:36.316980] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.151 [2024-10-08 18:33:36.317194] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.151 [2024-10-08 18:33:36.317213] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.151 [2024-10-08 18:33:36.317225] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.151 [2024-10-08 18:33:36.320197] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.151 [2024-10-08 18:33:36.329675] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.151 [2024-10-08 18:33:36.330050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.151 [2024-10-08 18:33:36.330078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.151 [2024-10-08 18:33:36.330094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.151 [2024-10-08 18:33:36.330347] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.151 [2024-10-08 18:33:36.330552] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.151 [2024-10-08 18:33:36.330572] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.151 [2024-10-08 18:33:36.330585] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.151 [2024-10-08 18:33:36.333625] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.151 [2024-10-08 18:33:36.342880] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.151 [2024-10-08 18:33:36.343261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.151 [2024-10-08 18:33:36.343311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.151 [2024-10-08 18:33:36.343328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.151 [2024-10-08 18:33:36.343571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.151 [2024-10-08 18:33:36.343803] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.151 [2024-10-08 18:33:36.343823] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.151 [2024-10-08 18:33:36.343836] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.151 [2024-10-08 18:33:36.346859] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.151 [2024-10-08 18:33:36.356209] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.151 [2024-10-08 18:33:36.356585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.151 [2024-10-08 18:33:36.356613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.151 [2024-10-08 18:33:36.356629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.151 [2024-10-08 18:33:36.356865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.151 [2024-10-08 18:33:36.357063] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.151 [2024-10-08 18:33:36.357082] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.151 [2024-10-08 18:33:36.357094] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.151 [2024-10-08 18:33:36.360189] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.151 [2024-10-08 18:33:36.369551] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.151 [2024-10-08 18:33:36.369988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.151 [2024-10-08 18:33:36.370021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.151 [2024-10-08 18:33:36.370044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.151 [2024-10-08 18:33:36.370295] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.151 [2024-10-08 18:33:36.370500] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.152 [2024-10-08 18:33:36.370520] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.152 [2024-10-08 18:33:36.370532] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.152 [2024-10-08 18:33:36.373791] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.152 [2024-10-08 18:33:36.383308] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.152 [2024-10-08 18:33:36.383672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.152 [2024-10-08 18:33:36.383701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.152 [2024-10-08 18:33:36.383717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.152 [2024-10-08 18:33:36.383946] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.152 [2024-10-08 18:33:36.384160] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.152 [2024-10-08 18:33:36.384179] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.152 [2024-10-08 18:33:36.384191] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.152 [2024-10-08 18:33:36.387197] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.152 [2024-10-08 18:33:36.396539] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.152 [2024-10-08 18:33:36.396945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.152 [2024-10-08 18:33:36.396987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.152 [2024-10-08 18:33:36.397004] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.152 [2024-10-08 18:33:36.397232] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.152 [2024-10-08 18:33:36.397473] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.152 [2024-10-08 18:33:36.397494] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.152 [2024-10-08 18:33:36.397506] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.152 [2024-10-08 18:33:36.400490] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.152 [2024-10-08 18:33:36.409855] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.152 [2024-10-08 18:33:36.410167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.152 [2024-10-08 18:33:36.410209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.152 [2024-10-08 18:33:36.410224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.152 [2024-10-08 18:33:36.410497] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.152 [2024-10-08 18:33:36.410721] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.152 [2024-10-08 18:33:36.410740] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.152 [2024-10-08 18:33:36.410752] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.152 [2024-10-08 18:33:36.413735] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.152 [2024-10-08 18:33:36.423080] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.152 [2024-10-08 18:33:36.423418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.152 [2024-10-08 18:33:36.423446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.152 [2024-10-08 18:33:36.423462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.152 [2024-10-08 18:33:36.423685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.152 [2024-10-08 18:33:36.423902] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.152 [2024-10-08 18:33:36.423921] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.152 [2024-10-08 18:33:36.423933] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.152 [2024-10-08 18:33:36.426917] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.152 [2024-10-08 18:33:36.436581] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.152 [2024-10-08 18:33:36.436943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.152 [2024-10-08 18:33:36.436971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.152 [2024-10-08 18:33:36.436988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.411 [2024-10-08 18:33:36.437216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.411 [2024-10-08 18:33:36.437467] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.411 [2024-10-08 18:33:36.437488] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.411 [2024-10-08 18:33:36.437501] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.411 [2024-10-08 18:33:36.440503] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.411 [2024-10-08 18:33:36.449775] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.411 [2024-10-08 18:33:36.450176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.411 [2024-10-08 18:33:36.450202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.411 [2024-10-08 18:33:36.450217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.411 [2024-10-08 18:33:36.450484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.411 [2024-10-08 18:33:36.450721] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.411 [2024-10-08 18:33:36.450740] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.411 [2024-10-08 18:33:36.450753] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.411 [2024-10-08 18:33:36.453734] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.411 [2024-10-08 18:33:36.462973] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.411 [2024-10-08 18:33:36.463350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.411 [2024-10-08 18:33:36.463379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.411 [2024-10-08 18:33:36.463395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.411 [2024-10-08 18:33:36.463636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.411 [2024-10-08 18:33:36.463835] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.411 [2024-10-08 18:33:36.463854] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.411 [2024-10-08 18:33:36.463866] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.411 [2024-10-08 18:33:36.466888] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.411 [2024-10-08 18:33:36.476243] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.411 [2024-10-08 18:33:36.476582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.411 [2024-10-08 18:33:36.476626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.411 [2024-10-08 18:33:36.476642] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.411 [2024-10-08 18:33:36.476872] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.411 [2024-10-08 18:33:36.477087] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.411 [2024-10-08 18:33:36.477106] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.411 [2024-10-08 18:33:36.477118] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.411 [2024-10-08 18:33:36.480139] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.411 [2024-10-08 18:33:36.489407] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.411 [2024-10-08 18:33:36.489796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.411 [2024-10-08 18:33:36.489839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.411 [2024-10-08 18:33:36.489854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.411 [2024-10-08 18:33:36.490109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.411 [2024-10-08 18:33:36.490351] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.411 [2024-10-08 18:33:36.490371] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.411 [2024-10-08 18:33:36.490384] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.411 [2024-10-08 18:33:36.493366] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.411 [2024-10-08 18:33:36.502734] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.411 [2024-10-08 18:33:36.503168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.411 [2024-10-08 18:33:36.503195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.411 [2024-10-08 18:33:36.503217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.411 [2024-10-08 18:33:36.503458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.411 [2024-10-08 18:33:36.503699] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.411 [2024-10-08 18:33:36.503718] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.411 [2024-10-08 18:33:36.503731] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.411 [2024-10-08 18:33:36.506672] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.411 [2024-10-08 18:33:36.515988] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.411 [2024-10-08 18:33:36.516365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.411 [2024-10-08 18:33:36.516394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.411 [2024-10-08 18:33:36.516411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.411 [2024-10-08 18:33:36.516639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.411 [2024-10-08 18:33:36.516855] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.411 [2024-10-08 18:33:36.516874] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.411 [2024-10-08 18:33:36.516886] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.411 [2024-10-08 18:33:36.519947] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.411 [2024-10-08 18:33:36.529252] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.411 [2024-10-08 18:33:36.529609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.411 [2024-10-08 18:33:36.529637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.411 [2024-10-08 18:33:36.529653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.411 [2024-10-08 18:33:36.529881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.411 [2024-10-08 18:33:36.530095] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.411 [2024-10-08 18:33:36.530114] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.411 [2024-10-08 18:33:36.530127] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.411 [2024-10-08 18:33:36.533147] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.411 [2024-10-08 18:33:36.542624] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.411 [2024-10-08 18:33:36.543009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.411 [2024-10-08 18:33:36.543051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.411 [2024-10-08 18:33:36.543067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.411 [2024-10-08 18:33:36.543320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.411 [2024-10-08 18:33:36.543526] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.411 [2024-10-08 18:33:36.543550] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.411 [2024-10-08 18:33:36.543564] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.411 [2024-10-08 18:33:36.546506] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.411 [2024-10-08 18:33:36.555951] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.411 [2024-10-08 18:33:36.556352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.411 [2024-10-08 18:33:36.556381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.411 [2024-10-08 18:33:36.556397] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.411 [2024-10-08 18:33:36.556630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.411 [2024-10-08 18:33:36.556846] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.411 [2024-10-08 18:33:36.556865] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.411 [2024-10-08 18:33:36.556877] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.411 [2024-10-08 18:33:36.559981] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.411 [2024-10-08 18:33:36.569210] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.411 [2024-10-08 18:33:36.569562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.411 [2024-10-08 18:33:36.569590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.411 [2024-10-08 18:33:36.569606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.411 [2024-10-08 18:33:36.569835] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.411 [2024-10-08 18:33:36.570051] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.412 [2024-10-08 18:33:36.570070] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.412 [2024-10-08 18:33:36.570082] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.412 [2024-10-08 18:33:36.573088] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.412 [2024-10-08 18:33:36.582506] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.412 [2024-10-08 18:33:36.582963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.412 [2024-10-08 18:33:36.582991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.412 [2024-10-08 18:33:36.583006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.412 [2024-10-08 18:33:36.583248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.412 [2024-10-08 18:33:36.583483] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.412 [2024-10-08 18:33:36.583504] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.412 [2024-10-08 18:33:36.583517] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.412 [2024-10-08 18:33:36.586603] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.412 [2024-10-08 18:33:36.595872] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.412 [2024-10-08 18:33:36.596253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.412 [2024-10-08 18:33:36.596281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.412 [2024-10-08 18:33:36.596307] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.412 [2024-10-08 18:33:36.596536] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.412 [2024-10-08 18:33:36.596769] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.412 [2024-10-08 18:33:36.596788] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.412 [2024-10-08 18:33:36.596800] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.412 [2024-10-08 18:33:36.599716] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.412 [2024-10-08 18:33:36.609163] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.412 [2024-10-08 18:33:36.609564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.412 [2024-10-08 18:33:36.609606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.412 [2024-10-08 18:33:36.609622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.412 [2024-10-08 18:33:36.609892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.412 [2024-10-08 18:33:36.610091] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.412 [2024-10-08 18:33:36.610110] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.412 [2024-10-08 18:33:36.610122] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.412 [2024-10-08 18:33:36.613178] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.412 [2024-10-08 18:33:36.622466] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.412 [2024-10-08 18:33:36.622888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.412 [2024-10-08 18:33:36.622917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.412 [2024-10-08 18:33:36.622932] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.412 [2024-10-08 18:33:36.623160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.412 [2024-10-08 18:33:36.623405] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.412 [2024-10-08 18:33:36.623427] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.412 [2024-10-08 18:33:36.623440] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.412 [2024-10-08 18:33:36.626555] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.412 [2024-10-08 18:33:36.635893] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.412 [2024-10-08 18:33:36.636209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.412 [2024-10-08 18:33:36.636252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.412 [2024-10-08 18:33:36.636267] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.412 [2024-10-08 18:33:36.636529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.412 [2024-10-08 18:33:36.636752] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.412 [2024-10-08 18:33:36.636773] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.412 [2024-10-08 18:33:36.636785] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.412 [2024-10-08 18:33:36.639905] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.412 [2024-10-08 18:33:36.649170] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.412 [2024-10-08 18:33:36.649523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.412 [2024-10-08 18:33:36.649551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.412 [2024-10-08 18:33:36.649567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.412 [2024-10-08 18:33:36.649798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.412 [2024-10-08 18:33:36.650014] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.412 [2024-10-08 18:33:36.650033] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.412 [2024-10-08 18:33:36.650045] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.412 [2024-10-08 18:33:36.653132] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.412 [2024-10-08 18:33:36.662465] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.412 [2024-10-08 18:33:36.662867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.412 [2024-10-08 18:33:36.662895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.412 [2024-10-08 18:33:36.662911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.412 [2024-10-08 18:33:36.663152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.412 [2024-10-08 18:33:36.663384] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.412 [2024-10-08 18:33:36.663406] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.412 [2024-10-08 18:33:36.663419] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.412 [2024-10-08 18:33:36.666363] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.412 [2024-10-08 18:33:36.675727] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.412 [2024-10-08 18:33:36.676096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.412 [2024-10-08 18:33:36.676138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.412 [2024-10-08 18:33:36.676154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.412 [2024-10-08 18:33:36.676420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.412 [2024-10-08 18:33:36.676626] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.412 [2024-10-08 18:33:36.676659] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.412 [2024-10-08 18:33:36.676677] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.412 [2024-10-08 18:33:36.679702] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.412 [2024-10-08 18:33:36.689019] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.412 [2024-10-08 18:33:36.689422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.412 [2024-10-08 18:33:36.689450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.412 [2024-10-08 18:33:36.689466] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.412 [2024-10-08 18:33:36.689696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.412 [2024-10-08 18:33:36.689911] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.412 [2024-10-08 18:33:36.689930] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.412 [2024-10-08 18:33:36.689942] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.412 [2024-10-08 18:33:36.692887] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.671 [2024-10-08 18:33:36.702325] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.671 [2024-10-08 18:33:36.702697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.671 [2024-10-08 18:33:36.702726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.671 [2024-10-08 18:33:36.702742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.671 [2024-10-08 18:33:36.702975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.671 [2024-10-08 18:33:36.703190] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.671 [2024-10-08 18:33:36.703210] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.671 [2024-10-08 18:33:36.703222] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.671 [2024-10-08 18:33:36.706229] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.671 [2024-10-08 18:33:36.715574] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.671 [2024-10-08 18:33:36.715962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.671 [2024-10-08 18:33:36.715990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.671 [2024-10-08 18:33:36.716006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.671 [2024-10-08 18:33:36.716241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.671 [2024-10-08 18:33:36.716469] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.671 [2024-10-08 18:33:36.716490] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.671 [2024-10-08 18:33:36.716503] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.671 [2024-10-08 18:33:36.719485] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.671 [2024-10-08 18:33:36.728808] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.671 [2024-10-08 18:33:36.729156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.671 [2024-10-08 18:33:36.729185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.671 [2024-10-08 18:33:36.729202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.671 [2024-10-08 18:33:36.729442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.671 [2024-10-08 18:33:36.729675] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.671 [2024-10-08 18:33:36.729694] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.671 [2024-10-08 18:33:36.729706] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.671 [2024-10-08 18:33:36.732730] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.671 [2024-10-08 18:33:36.742177] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.671 [2024-10-08 18:33:36.742599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.671 [2024-10-08 18:33:36.742642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.671 [2024-10-08 18:33:36.742658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.671 [2024-10-08 18:33:36.742910] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.671 [2024-10-08 18:33:36.743108] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.672 [2024-10-08 18:33:36.743127] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.672 [2024-10-08 18:33:36.743139] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.672 [2024-10-08 18:33:36.746081] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.672 [2024-10-08 18:33:36.755539] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.672 [2024-10-08 18:33:36.755862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.672 [2024-10-08 18:33:36.755891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.672 [2024-10-08 18:33:36.755907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.672 [2024-10-08 18:33:36.756135] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.672 [2024-10-08 18:33:36.756380] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.672 [2024-10-08 18:33:36.756400] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.672 [2024-10-08 18:33:36.756413] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.672 [2024-10-08 18:33:36.759392] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.672 [2024-10-08 18:33:36.768831] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.672 [2024-10-08 18:33:36.769212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.672 [2024-10-08 18:33:36.769241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.672 [2024-10-08 18:33:36.769257] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.672 [2024-10-08 18:33:36.769498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.672 [2024-10-08 18:33:36.769735] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.672 [2024-10-08 18:33:36.769755] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.672 [2024-10-08 18:33:36.769768] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.672 [2024-10-08 18:33:36.772785] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.672 [2024-10-08 18:33:36.782144] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.672 [2024-10-08 18:33:36.782481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.672 [2024-10-08 18:33:36.782524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.672 [2024-10-08 18:33:36.782540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.672 [2024-10-08 18:33:36.782777] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.672 [2024-10-08 18:33:36.782990] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.672 [2024-10-08 18:33:36.783009] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.672 [2024-10-08 18:33:36.783022] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.672 [2024-10-08 18:33:36.786018] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.672 5380.75 IOPS, 21.02 MiB/s [2024-10-08T16:33:36.959Z] [2024-10-08 18:33:36.796956] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.672 [2024-10-08 18:33:36.797366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.672 [2024-10-08 18:33:36.797394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.672 [2024-10-08 18:33:36.797410] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.672 [2024-10-08 18:33:36.797667] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.672 [2024-10-08 18:33:36.797867] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.672 [2024-10-08 18:33:36.797886] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.672 [2024-10-08 18:33:36.797898] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.672 [2024-10-08 18:33:36.800886] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.672 [2024-10-08 18:33:36.810275] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.672 [2024-10-08 18:33:36.810635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.672 [2024-10-08 18:33:36.810663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.672 [2024-10-08 18:33:36.810680] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.672 [2024-10-08 18:33:36.810911] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.672 [2024-10-08 18:33:36.811126] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.672 [2024-10-08 18:33:36.811145] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.672 [2024-10-08 18:33:36.811162] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.672 [2024-10-08 18:33:36.814144] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.672 [2024-10-08 18:33:36.823519] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.672 [2024-10-08 18:33:36.823928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.672 [2024-10-08 18:33:36.823955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.672 [2024-10-08 18:33:36.823970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.672 [2024-10-08 18:33:36.824193] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.672 [2024-10-08 18:33:36.824441] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.672 [2024-10-08 18:33:36.824462] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.672 [2024-10-08 18:33:36.824475] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.672 [2024-10-08 18:33:36.827555] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.672 [2024-10-08 18:33:36.836728] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.672 [2024-10-08 18:33:36.837065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.672 [2024-10-08 18:33:36.837093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.672 [2024-10-08 18:33:36.837109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.672 [2024-10-08 18:33:36.837364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.672 [2024-10-08 18:33:36.837570] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.672 [2024-10-08 18:33:36.837590] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.672 [2024-10-08 18:33:36.837603] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.672 [2024-10-08 18:33:36.840601] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.672 [2024-10-08 18:33:36.850036] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.672 [2024-10-08 18:33:36.850406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.672 [2024-10-08 18:33:36.850434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.672 [2024-10-08 18:33:36.850449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.672 [2024-10-08 18:33:36.850692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.672 [2024-10-08 18:33:36.850905] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.672 [2024-10-08 18:33:36.850924] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.672 [2024-10-08 18:33:36.850936] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.672 [2024-10-08 18:33:36.853932] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.672 [2024-10-08 18:33:36.863216] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.672 [2024-10-08 18:33:36.863612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.672 [2024-10-08 18:33:36.863660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.672 [2024-10-08 18:33:36.863677] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.672 [2024-10-08 18:33:36.863944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.672 [2024-10-08 18:33:36.864143] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.672 [2024-10-08 18:33:36.864162] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.672 [2024-10-08 18:33:36.864174] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.672 [2024-10-08 18:33:36.867154] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.672 [2024-10-08 18:33:36.876363] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.672 [2024-10-08 18:33:36.876770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.672 [2024-10-08 18:33:36.876798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.672 [2024-10-08 18:33:36.876814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.672 [2024-10-08 18:33:36.877056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.673 [2024-10-08 18:33:36.877271] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.673 [2024-10-08 18:33:36.877315] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.673 [2024-10-08 18:33:36.877331] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.673 [2024-10-08 18:33:36.880449] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.673 [2024-10-08 18:33:36.889879] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.673 [2024-10-08 18:33:36.890223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.673 [2024-10-08 18:33:36.890252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.673 [2024-10-08 18:33:36.890269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.673 [2024-10-08 18:33:36.890511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.673 [2024-10-08 18:33:36.890744] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.673 [2024-10-08 18:33:36.890764] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.673 [2024-10-08 18:33:36.890776] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.673 [2024-10-08 18:33:36.894007] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.673 [2024-10-08 18:33:36.903136] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.673 [2024-10-08 18:33:36.903493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.673 [2024-10-08 18:33:36.903520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.673 [2024-10-08 18:33:36.903536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.673 [2024-10-08 18:33:36.903769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.673 [2024-10-08 18:33:36.903991] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.673 [2024-10-08 18:33:36.904010] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.673 [2024-10-08 18:33:36.904022] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.673 [2024-10-08 18:33:36.907005] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.673 [2024-10-08 18:33:36.916506] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.673 [2024-10-08 18:33:36.916896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.673 [2024-10-08 18:33:36.916940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.673 [2024-10-08 18:33:36.916957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.673 [2024-10-08 18:33:36.917211] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.673 [2024-10-08 18:33:36.917440] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.673 [2024-10-08 18:33:36.917460] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.673 [2024-10-08 18:33:36.917473] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.673 [2024-10-08 18:33:36.920396] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.673 [2024-10-08 18:33:36.929833] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.673 [2024-10-08 18:33:36.930205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.673 [2024-10-08 18:33:36.930233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.673 [2024-10-08 18:33:36.930249] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.673 [2024-10-08 18:33:36.930501] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.673 [2024-10-08 18:33:36.930717] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.673 [2024-10-08 18:33:36.930737] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.673 [2024-10-08 18:33:36.930749] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.673 [2024-10-08 18:33:36.933770] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.673 [2024-10-08 18:33:36.943038] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.673 [2024-10-08 18:33:36.943449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.673 [2024-10-08 18:33:36.943492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.673 [2024-10-08 18:33:36.943507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.673 [2024-10-08 18:33:36.943778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.673 [2024-10-08 18:33:36.943978] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.673 [2024-10-08 18:33:36.943997] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.673 [2024-10-08 18:33:36.944009] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.673 [2024-10-08 18:33:36.946958] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.673 [2024-10-08 18:33:36.956636] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.673 [2024-10-08 18:33:36.957047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.673 [2024-10-08 18:33:36.957074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.673 [2024-10-08 18:33:36.957090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.673 [2024-10-08 18:33:36.957355] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.673 [2024-10-08 18:33:36.957574] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.673 [2024-10-08 18:33:36.957596] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.673 [2024-10-08 18:33:36.957609] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.972 [2024-10-08 18:33:36.960775] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.972 [2024-10-08 18:33:36.969800] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.973 [2024-10-08 18:33:36.970210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.973 [2024-10-08 18:33:36.970264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.973 [2024-10-08 18:33:36.970279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.973 [2024-10-08 18:33:36.970556] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.973 [2024-10-08 18:33:36.970764] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.973 [2024-10-08 18:33:36.970783] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.973 [2024-10-08 18:33:36.970795] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.973 [2024-10-08 18:33:36.973650] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.973 [2024-10-08 18:33:36.982914] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.973 [2024-10-08 18:33:36.983279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.973 [2024-10-08 18:33:36.983315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.973 [2024-10-08 18:33:36.983332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.973 [2024-10-08 18:33:36.983573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.973 [2024-10-08 18:33:36.983781] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.973 [2024-10-08 18:33:36.983800] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.973 [2024-10-08 18:33:36.983812] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.973 [2024-10-08 18:33:36.986728] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.973 [2024-10-08 18:33:36.995985] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.973 [2024-10-08 18:33:36.996351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.973 [2024-10-08 18:33:36.996379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.973 [2024-10-08 18:33:36.996399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.973 [2024-10-08 18:33:36.996636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.973 [2024-10-08 18:33:36.996844] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.973 [2024-10-08 18:33:36.996863] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.973 [2024-10-08 18:33:36.996875] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.973 [2024-10-08 18:33:36.999749] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.973 [2024-10-08 18:33:37.009003] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.973 [2024-10-08 18:33:37.009385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.973 [2024-10-08 18:33:37.009427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.973 [2024-10-08 18:33:37.009443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.973 [2024-10-08 18:33:37.009689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.973 [2024-10-08 18:33:37.009882] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.973 [2024-10-08 18:33:37.009901] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.973 [2024-10-08 18:33:37.009912] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.973 [2024-10-08 18:33:37.012813] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.973 [2024-10-08 18:33:37.022197] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.973 [2024-10-08 18:33:37.022586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.973 [2024-10-08 18:33:37.022613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.973 [2024-10-08 18:33:37.022629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.973 [2024-10-08 18:33:37.022860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.973 [2024-10-08 18:33:37.023053] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.973 [2024-10-08 18:33:37.023072] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.973 [2024-10-08 18:33:37.023084] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.973 [2024-10-08 18:33:37.025943] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.973 [2024-10-08 18:33:37.035198] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.973 [2024-10-08 18:33:37.035713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.973 [2024-10-08 18:33:37.035755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.973 [2024-10-08 18:33:37.035772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.973 [2024-10-08 18:33:37.036020] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.973 [2024-10-08 18:33:37.036213] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.973 [2024-10-08 18:33:37.036235] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.973 [2024-10-08 18:33:37.036248] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.973 [2024-10-08 18:33:37.039171] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.973 [2024-10-08 18:33:37.048376] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.973 [2024-10-08 18:33:37.048775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.973 [2024-10-08 18:33:37.048819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.973 [2024-10-08 18:33:37.048835] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.973 [2024-10-08 18:33:37.049086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.973 [2024-10-08 18:33:37.049307] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.973 [2024-10-08 18:33:37.049327] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.973 [2024-10-08 18:33:37.049357] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.973 [2024-10-08 18:33:37.052278] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.973 [2024-10-08 18:33:37.061727] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.973 [2024-10-08 18:33:37.062219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.973 [2024-10-08 18:33:37.062250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.973 [2024-10-08 18:33:37.062281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.973 [2024-10-08 18:33:37.062544] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.973 [2024-10-08 18:33:37.062756] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.973 [2024-10-08 18:33:37.062774] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.973 [2024-10-08 18:33:37.062786] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.973 [2024-10-08 18:33:37.065726] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.973 [2024-10-08 18:33:37.074981] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.973 [2024-10-08 18:33:37.075381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.973 [2024-10-08 18:33:37.075409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.973 [2024-10-08 18:33:37.075424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.973 [2024-10-08 18:33:37.075691] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.973 [2024-10-08 18:33:37.075884] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.973 [2024-10-08 18:33:37.075903] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.973 [2024-10-08 18:33:37.075916] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.973 [2024-10-08 18:33:37.078927] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.973 [2024-10-08 18:33:37.088106] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.973 [2024-10-08 18:33:37.088497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.973 [2024-10-08 18:33:37.088526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.973 [2024-10-08 18:33:37.088542] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.973 [2024-10-08 18:33:37.088793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.973 [2024-10-08 18:33:37.089002] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.973 [2024-10-08 18:33:37.089020] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.973 [2024-10-08 18:33:37.089033] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.973 [2024-10-08 18:33:37.091935] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.973 [2024-10-08 18:33:37.101223] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.973 [2024-10-08 18:33:37.101593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.973 [2024-10-08 18:33:37.101661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.973 [2024-10-08 18:33:37.101676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.973 [2024-10-08 18:33:37.101912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.973 [2024-10-08 18:33:37.102105] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.974 [2024-10-08 18:33:37.102123] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.974 [2024-10-08 18:33:37.102135] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.974 [2024-10-08 18:33:37.105001] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.974 [2024-10-08 18:33:37.114557] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.974 [2024-10-08 18:33:37.114905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.974 [2024-10-08 18:33:37.114932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.974 [2024-10-08 18:33:37.114948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.974 [2024-10-08 18:33:37.115183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.974 [2024-10-08 18:33:37.115409] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.974 [2024-10-08 18:33:37.115430] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.974 [2024-10-08 18:33:37.115442] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.974 [2024-10-08 18:33:37.118392] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.974 [2024-10-08 18:33:37.127759] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.974 [2024-10-08 18:33:37.128139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.974 [2024-10-08 18:33:37.128167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.974 [2024-10-08 18:33:37.128183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.974 [2024-10-08 18:33:37.128455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.974 [2024-10-08 18:33:37.128676] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.974 [2024-10-08 18:33:37.128695] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.974 [2024-10-08 18:33:37.128708] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.974 [2024-10-08 18:33:37.132039] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.974 [2024-10-08 18:33:37.141003] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.974 [2024-10-08 18:33:37.141369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.974 [2024-10-08 18:33:37.141399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.974 [2024-10-08 18:33:37.141416] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.974 [2024-10-08 18:33:37.141630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.974 [2024-10-08 18:33:37.141839] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.974 [2024-10-08 18:33:37.141858] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.974 [2024-10-08 18:33:37.141870] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.974 [2024-10-08 18:33:37.144832] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.974 [2024-10-08 18:33:37.154321] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.974 [2024-10-08 18:33:37.154811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.974 [2024-10-08 18:33:37.154853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.974 [2024-10-08 18:33:37.154870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.974 [2024-10-08 18:33:37.155120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.974 [2024-10-08 18:33:37.155357] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.974 [2024-10-08 18:33:37.155377] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.974 [2024-10-08 18:33:37.155390] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.974 [2024-10-08 18:33:37.158266] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.974 [2024-10-08 18:33:37.167483] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.974 [2024-10-08 18:33:37.167914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.974 [2024-10-08 18:33:37.167965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.974 [2024-10-08 18:33:37.167980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.974 [2024-10-08 18:33:37.168238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.974 [2024-10-08 18:33:37.168461] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.974 [2024-10-08 18:33:37.168481] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.974 [2024-10-08 18:33:37.168498] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.974 [2024-10-08 18:33:37.171404] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.974 [2024-10-08 18:33:37.180648] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.974 [2024-10-08 18:33:37.181011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.974 [2024-10-08 18:33:37.181053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.974 [2024-10-08 18:33:37.181068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.974 [2024-10-08 18:33:37.181331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.974 [2024-10-08 18:33:37.181529] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.974 [2024-10-08 18:33:37.181548] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.974 [2024-10-08 18:33:37.181560] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.974 [2024-10-08 18:33:37.184462] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:18.974 [2024-10-08 18:33:37.193758] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.974 [2024-10-08 18:33:37.194122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.974 [2024-10-08 18:33:37.194150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.974 [2024-10-08 18:33:37.194165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.974 [2024-10-08 18:33:37.194411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.974 [2024-10-08 18:33:37.194626] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.974 [2024-10-08 18:33:37.194645] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.974 [2024-10-08 18:33:37.194657] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.974 [2024-10-08 18:33:37.197540] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:18.974 [2024-10-08 18:33:37.206839] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:18.974 [2024-10-08 18:33:37.207203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.974 [2024-10-08 18:33:37.207231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:18.974 [2024-10-08 18:33:37.207246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:18.974 [2024-10-08 18:33:37.207498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:18.974 [2024-10-08 18:33:37.207725] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:18.974 [2024-10-08 18:33:37.207744] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:18.974 [2024-10-08 18:33:37.207755] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:18.974 [2024-10-08 18:33:37.210714] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.233 [2024-10-08 18:33:37.220280] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.233 [2024-10-08 18:33:37.220788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.233 [2024-10-08 18:33:37.220830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.233 [2024-10-08 18:33:37.220846] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.233 [2024-10-08 18:33:37.221078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.233 [2024-10-08 18:33:37.221316] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.233 [2024-10-08 18:33:37.221351] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.233 [2024-10-08 18:33:37.221364] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.233 [2024-10-08 18:33:37.224261] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.233 [2024-10-08 18:33:37.233540] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.233 [2024-10-08 18:33:37.234047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.233 [2024-10-08 18:33:37.234088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.233 [2024-10-08 18:33:37.234104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.233 [2024-10-08 18:33:37.234347] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.233 [2024-10-08 18:33:37.234546] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.233 [2024-10-08 18:33:37.234565] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.233 [2024-10-08 18:33:37.234577] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.233 [2024-10-08 18:33:37.237453] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.233 [2024-10-08 18:33:37.246629] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.233 [2024-10-08 18:33:37.246963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.233 [2024-10-08 18:33:37.246990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.233 [2024-10-08 18:33:37.247005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.233 [2024-10-08 18:33:37.247226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.233 [2024-10-08 18:33:37.247462] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.233 [2024-10-08 18:33:37.247482] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.233 [2024-10-08 18:33:37.247494] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.233 [2024-10-08 18:33:37.250392] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.233 [2024-10-08 18:33:37.259792] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.233 [2024-10-08 18:33:37.260172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.233 [2024-10-08 18:33:37.260199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.233 [2024-10-08 18:33:37.260215] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.233 [2024-10-08 18:33:37.260476] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.233 [2024-10-08 18:33:37.260696] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.233 [2024-10-08 18:33:37.260715] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.233 [2024-10-08 18:33:37.260727] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.233 [2024-10-08 18:33:37.263709] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.233 [2024-10-08 18:33:37.273029] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.233 [2024-10-08 18:33:37.273418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.233 [2024-10-08 18:33:37.273446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.233 [2024-10-08 18:33:37.273462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.233 [2024-10-08 18:33:37.273690] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.233 [2024-10-08 18:33:37.273899] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.233 [2024-10-08 18:33:37.273918] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.233 [2024-10-08 18:33:37.273930] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.233 [2024-10-08 18:33:37.276770] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.233 [2024-10-08 18:33:37.286295] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.233 [2024-10-08 18:33:37.286671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.233 [2024-10-08 18:33:37.286713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.233 [2024-10-08 18:33:37.286729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.233 [2024-10-08 18:33:37.286978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.234 [2024-10-08 18:33:37.287171] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.234 [2024-10-08 18:33:37.287190] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.234 [2024-10-08 18:33:37.287202] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.234 [2024-10-08 18:33:37.290065] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.234 [2024-10-08 18:33:37.299478] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.234 [2024-10-08 18:33:37.299838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.234 [2024-10-08 18:33:37.299865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.234 [2024-10-08 18:33:37.299881] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.234 [2024-10-08 18:33:37.300116] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.234 [2024-10-08 18:33:37.300352] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.234 [2024-10-08 18:33:37.300372] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.234 [2024-10-08 18:33:37.300385] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.234 [2024-10-08 18:33:37.303264] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.234 [2024-10-08 18:33:37.312846] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.234 [2024-10-08 18:33:37.313256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.234 [2024-10-08 18:33:37.313319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.234 [2024-10-08 18:33:37.313336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.234 [2024-10-08 18:33:37.313597] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.234 [2024-10-08 18:33:37.313791] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.234 [2024-10-08 18:33:37.313809] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.234 [2024-10-08 18:33:37.313821] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.234 [2024-10-08 18:33:37.316727] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.234 [2024-10-08 18:33:37.325912] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.234 [2024-10-08 18:33:37.326335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.234 [2024-10-08 18:33:37.326362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.234 [2024-10-08 18:33:37.326378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.234 [2024-10-08 18:33:37.326642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.234 [2024-10-08 18:33:37.326835] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.234 [2024-10-08 18:33:37.326853] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.234 [2024-10-08 18:33:37.326865] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.234 [2024-10-08 18:33:37.329787] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.234 [2024-10-08 18:33:37.339058] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.234 [2024-10-08 18:33:37.339458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.234 [2024-10-08 18:33:37.339501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.234 [2024-10-08 18:33:37.339516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.234 [2024-10-08 18:33:37.339779] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.234 [2024-10-08 18:33:37.339972] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.234 [2024-10-08 18:33:37.339991] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.234 [2024-10-08 18:33:37.340002] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.234 [2024-10-08 18:33:37.342990] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.234 [2024-10-08 18:33:37.352222] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.234 [2024-10-08 18:33:37.352665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.234 [2024-10-08 18:33:37.352728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.234 [2024-10-08 18:33:37.352749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.234 [2024-10-08 18:33:37.352979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.234 [2024-10-08 18:33:37.353172] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.234 [2024-10-08 18:33:37.353190] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.234 [2024-10-08 18:33:37.353202] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.234 [2024-10-08 18:33:37.356209] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.234 [2024-10-08 18:33:37.365368] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.234 [2024-10-08 18:33:37.365773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.234 [2024-10-08 18:33:37.365800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.234 [2024-10-08 18:33:37.365816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.234 [2024-10-08 18:33:37.366034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.234 [2024-10-08 18:33:37.366235] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.234 [2024-10-08 18:33:37.366254] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.234 [2024-10-08 18:33:37.366280] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.234 [2024-10-08 18:33:37.369377] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.234 [2024-10-08 18:33:37.378742] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.234 [2024-10-08 18:33:37.379193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.234 [2024-10-08 18:33:37.379235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.234 [2024-10-08 18:33:37.379251] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.234 [2024-10-08 18:33:37.379492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.234 [2024-10-08 18:33:37.379733] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.234 [2024-10-08 18:33:37.379752] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.234 [2024-10-08 18:33:37.379765] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.234 [2024-10-08 18:33:37.383207] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.234 [2024-10-08 18:33:37.392005] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.234 [2024-10-08 18:33:37.392389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.234 [2024-10-08 18:33:37.392426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.234 [2024-10-08 18:33:37.392442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.234 [2024-10-08 18:33:37.392669] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.234 [2024-10-08 18:33:37.392883] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.234 [2024-10-08 18:33:37.392902] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.234 [2024-10-08 18:33:37.392914] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.234 [2024-10-08 18:33:37.395905] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.234 [2024-10-08 18:33:37.405185] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.234 [2024-10-08 18:33:37.405559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.234 [2024-10-08 18:33:37.405596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.234 [2024-10-08 18:33:37.405627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.234 [2024-10-08 18:33:37.405863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.234 [2024-10-08 18:33:37.406056] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.234 [2024-10-08 18:33:37.406074] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.234 [2024-10-08 18:33:37.406086] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.234 [2024-10-08 18:33:37.409137] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.234 [2024-10-08 18:33:37.418362] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.234 [2024-10-08 18:33:37.418759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.234 [2024-10-08 18:33:37.418801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.234 [2024-10-08 18:33:37.418817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.234 [2024-10-08 18:33:37.419066] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.234 [2024-10-08 18:33:37.419274] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.234 [2024-10-08 18:33:37.419302] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.234 [2024-10-08 18:33:37.419315] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.234 [2024-10-08 18:33:37.422096] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.234 [2024-10-08 18:33:37.431552] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.234 [2024-10-08 18:33:37.431933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.234 [2024-10-08 18:33:37.431974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.235 [2024-10-08 18:33:37.431990] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.235 [2024-10-08 18:33:37.432243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.235 [2024-10-08 18:33:37.432480] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.235 [2024-10-08 18:33:37.432500] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.235 [2024-10-08 18:33:37.432512] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.235 [2024-10-08 18:33:37.435417] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.235 [2024-10-08 18:33:37.444764] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.235 [2024-10-08 18:33:37.445065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.235 [2024-10-08 18:33:37.445106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.235 [2024-10-08 18:33:37.445121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.235 [2024-10-08 18:33:37.445365] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.235 [2024-10-08 18:33:37.445571] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.235 [2024-10-08 18:33:37.445604] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.235 [2024-10-08 18:33:37.445616] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.235 [2024-10-08 18:33:37.448438] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.235 [2024-10-08 18:33:37.457776] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.235 [2024-10-08 18:33:37.458139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.235 [2024-10-08 18:33:37.458182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.235 [2024-10-08 18:33:37.458197] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.235 [2024-10-08 18:33:37.458439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.235 [2024-10-08 18:33:37.458652] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.235 [2024-10-08 18:33:37.458671] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.235 [2024-10-08 18:33:37.458683] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.235 [2024-10-08 18:33:37.461619] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.235 [2024-10-08 18:33:37.470944] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.235 [2024-10-08 18:33:37.471274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.235 [2024-10-08 18:33:37.471308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.235 [2024-10-08 18:33:37.471339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.235 [2024-10-08 18:33:37.471583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.235 [2024-10-08 18:33:37.471792] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.235 [2024-10-08 18:33:37.471811] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.235 [2024-10-08 18:33:37.471823] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.235 [2024-10-08 18:33:37.474693] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.235 [2024-10-08 18:33:37.484102] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.235 [2024-10-08 18:33:37.484507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.235 [2024-10-08 18:33:37.484534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.235 [2024-10-08 18:33:37.484555] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.235 [2024-10-08 18:33:37.484777] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.235 [2024-10-08 18:33:37.484986] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.235 [2024-10-08 18:33:37.485004] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.235 [2024-10-08 18:33:37.485016] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.235 [2024-10-08 18:33:37.487888] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.235 [2024-10-08 18:33:37.497110] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.235 [2024-10-08 18:33:37.497487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.235 [2024-10-08 18:33:37.497530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.235 [2024-10-08 18:33:37.497545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.235 [2024-10-08 18:33:37.497792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.235 [2024-10-08 18:33:37.497984] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.235 [2024-10-08 18:33:37.498003] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.235 [2024-10-08 18:33:37.498014] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.235 [2024-10-08 18:33:37.500862] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.235 [2024-10-08 18:33:37.510280] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.235 [2024-10-08 18:33:37.510648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.235 [2024-10-08 18:33:37.510691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.235 [2024-10-08 18:33:37.510706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.235 [2024-10-08 18:33:37.510952] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.235 [2024-10-08 18:33:37.511160] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.235 [2024-10-08 18:33:37.511178] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.235 [2024-10-08 18:33:37.511190] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.235 [2024-10-08 18:33:37.514247] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.494 [2024-10-08 18:33:37.523513] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.494 [2024-10-08 18:33:37.523896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.494 [2024-10-08 18:33:37.523923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.494 [2024-10-08 18:33:37.523938] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.494 [2024-10-08 18:33:37.524186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.494 [2024-10-08 18:33:37.524423] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.494 [2024-10-08 18:33:37.524449] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.494 [2024-10-08 18:33:37.524462] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.494 [2024-10-08 18:33:37.527603] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.494 [2024-10-08 18:33:37.536575] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.494 [2024-10-08 18:33:37.536940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.494 [2024-10-08 18:33:37.536966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.494 [2024-10-08 18:33:37.536982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.494 [2024-10-08 18:33:37.537200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.494 [2024-10-08 18:33:37.537437] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.494 [2024-10-08 18:33:37.537457] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.494 [2024-10-08 18:33:37.537469] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.494 [2024-10-08 18:33:37.540365] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.494 [2024-10-08 18:33:37.549603] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.494 [2024-10-08 18:33:37.550026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.494 [2024-10-08 18:33:37.550054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.494 [2024-10-08 18:33:37.550069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.494 [2024-10-08 18:33:37.550314] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.494 [2024-10-08 18:33:37.550513] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.494 [2024-10-08 18:33:37.550532] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.494 [2024-10-08 18:33:37.550544] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.494 [2024-10-08 18:33:37.553440] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.494 [2024-10-08 18:33:37.562673] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.494 [2024-10-08 18:33:37.563006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.494 [2024-10-08 18:33:37.563033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.494 [2024-10-08 18:33:37.563048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.494 [2024-10-08 18:33:37.563271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.494 [2024-10-08 18:33:37.563513] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.494 [2024-10-08 18:33:37.563534] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.494 [2024-10-08 18:33:37.563547] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.494 [2024-10-08 18:33:37.566499] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.494 [2024-10-08 18:33:37.575725] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.494 [2024-10-08 18:33:37.576090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.494 [2024-10-08 18:33:37.576132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.494 [2024-10-08 18:33:37.576147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.494 [2024-10-08 18:33:37.576405] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.494 [2024-10-08 18:33:37.576605] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.494 [2024-10-08 18:33:37.576624] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.494 [2024-10-08 18:33:37.576650] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.494 [2024-10-08 18:33:37.579538] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.494 [2024-10-08 18:33:37.588747] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.494 [2024-10-08 18:33:37.589141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.494 [2024-10-08 18:33:37.589212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.494 [2024-10-08 18:33:37.589228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.494 [2024-10-08 18:33:37.589489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.494 [2024-10-08 18:33:37.589717] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.494 [2024-10-08 18:33:37.589735] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.494 [2024-10-08 18:33:37.589747] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.494 [2024-10-08 18:33:37.592653] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.494 [2024-10-08 18:33:37.601876] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.494 [2024-10-08 18:33:37.602250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.494 [2024-10-08 18:33:37.602301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.494 [2024-10-08 18:33:37.602320] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.494 [2024-10-08 18:33:37.602574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.494 [2024-10-08 18:33:37.602784] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.494 [2024-10-08 18:33:37.602803] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.494 [2024-10-08 18:33:37.602815] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.494 [2024-10-08 18:33:37.605717] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.494 [2024-10-08 18:33:37.614922] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.494 [2024-10-08 18:33:37.615284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.494 [2024-10-08 18:33:37.615318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.495 [2024-10-08 18:33:37.615334] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.495 [2024-10-08 18:33:37.615575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.495 [2024-10-08 18:33:37.615784] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.495 [2024-10-08 18:33:37.615802] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.495 [2024-10-08 18:33:37.615814] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.495 [2024-10-08 18:33:37.618648] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.495 [2024-10-08 18:33:37.628023] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.495 [2024-10-08 18:33:37.628342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.495 [2024-10-08 18:33:37.628368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.495 [2024-10-08 18:33:37.628383] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.495 [2024-10-08 18:33:37.628600] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.495 [2024-10-08 18:33:37.628810] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.495 [2024-10-08 18:33:37.628829] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.495 [2024-10-08 18:33:37.628840] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.495 [2024-10-08 18:33:37.632207] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.495 [2024-10-08 18:33:37.641143] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.495 [2024-10-08 18:33:37.641559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.495 [2024-10-08 18:33:37.641602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.495 [2024-10-08 18:33:37.641618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.495 [2024-10-08 18:33:37.641839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.495 [2024-10-08 18:33:37.642048] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.495 [2024-10-08 18:33:37.642066] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.495 [2024-10-08 18:33:37.642078] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.495 [2024-10-08 18:33:37.645088] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.495 [2024-10-08 18:33:37.654369] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.495 [2024-10-08 18:33:37.654752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.495 [2024-10-08 18:33:37.654793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.495 [2024-10-08 18:33:37.654809] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.495 [2024-10-08 18:33:37.655061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.495 [2024-10-08 18:33:37.655282] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.495 [2024-10-08 18:33:37.655311] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.495 [2024-10-08 18:33:37.655329] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.495 [2024-10-08 18:33:37.658127] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.495 [2024-10-08 18:33:37.667394] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.495 [2024-10-08 18:33:37.667784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.495 [2024-10-08 18:33:37.667811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.495 [2024-10-08 18:33:37.667826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.495 [2024-10-08 18:33:37.668049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.495 [2024-10-08 18:33:37.668275] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.495 [2024-10-08 18:33:37.668317] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.495 [2024-10-08 18:33:37.668331] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.495 [2024-10-08 18:33:37.671130] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.495 [2024-10-08 18:33:37.680480] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.495 [2024-10-08 18:33:37.680844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.495 [2024-10-08 18:33:37.680887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.495 [2024-10-08 18:33:37.680902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.495 [2024-10-08 18:33:37.681149] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.495 [2024-10-08 18:33:37.681369] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.495 [2024-10-08 18:33:37.681389] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.495 [2024-10-08 18:33:37.681401] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.495 [2024-10-08 18:33:37.684280] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.495 [2024-10-08 18:33:37.693660] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.495 [2024-10-08 18:33:37.694026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.495 [2024-10-08 18:33:37.694068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.495 [2024-10-08 18:33:37.694084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.495 [2024-10-08 18:33:37.694368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.495 [2024-10-08 18:33:37.694588] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.495 [2024-10-08 18:33:37.694608] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.495 [2024-10-08 18:33:37.694621] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.495 [2024-10-08 18:33:37.697574] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.495 [2024-10-08 18:33:37.706758] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.495 [2024-10-08 18:33:37.707131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.495 [2024-10-08 18:33:37.707164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.495 [2024-10-08 18:33:37.707180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.495 [2024-10-08 18:33:37.707421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.495 [2024-10-08 18:33:37.707665] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.495 [2024-10-08 18:33:37.707685] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.495 [2024-10-08 18:33:37.707697] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.495 [2024-10-08 18:33:37.710567] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.495 [2024-10-08 18:33:37.719897] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.495 [2024-10-08 18:33:37.720370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.495 [2024-10-08 18:33:37.720398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.495 [2024-10-08 18:33:37.720430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.495 [2024-10-08 18:33:37.720684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.495 [2024-10-08 18:33:37.720877] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.495 [2024-10-08 18:33:37.720896] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.495 [2024-10-08 18:33:37.720908] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.495 [2024-10-08 18:33:37.723811] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.495 [2024-10-08 18:33:37.733073] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.495 [2024-10-08 18:33:37.733455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.495 [2024-10-08 18:33:37.733496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.495 [2024-10-08 18:33:37.733513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.495 [2024-10-08 18:33:37.733734] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.495 [2024-10-08 18:33:37.733942] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.495 [2024-10-08 18:33:37.733961] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.495 [2024-10-08 18:33:37.733972] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.495 [2024-10-08 18:33:37.736798] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.495 [2024-10-08 18:33:37.746201] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.495 [2024-10-08 18:33:37.746598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.495 [2024-10-08 18:33:37.746641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.495 [2024-10-08 18:33:37.746657] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.495 [2024-10-08 18:33:37.746925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.495 [2024-10-08 18:33:37.747123] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.495 [2024-10-08 18:33:37.747142] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.496 [2024-10-08 18:33:37.747154] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.496 [2024-10-08 18:33:37.750094] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.496 [2024-10-08 18:33:37.759441] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.496 [2024-10-08 18:33:37.759825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.496 [2024-10-08 18:33:37.759867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.496 [2024-10-08 18:33:37.759883] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.496 [2024-10-08 18:33:37.760137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.496 [2024-10-08 18:33:37.760404] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.496 [2024-10-08 18:33:37.760426] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.496 [2024-10-08 18:33:37.760439] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.496 [2024-10-08 18:33:37.763352] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.496 [2024-10-08 18:33:37.772529] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.496 [2024-10-08 18:33:37.772893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.496 [2024-10-08 18:33:37.772919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.496 [2024-10-08 18:33:37.772933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.496 [2024-10-08 18:33:37.773147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.496 [2024-10-08 18:33:37.773373] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.496 [2024-10-08 18:33:37.773394] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.496 [2024-10-08 18:33:37.773407] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.496 [2024-10-08 18:33:37.776315] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.754 [2024-10-08 18:33:37.785769] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.754 [2024-10-08 18:33:37.786142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.754 [2024-10-08 18:33:37.786170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.754 [2024-10-08 18:33:37.786186] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.754 [2024-10-08 18:33:37.786424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.754 [2024-10-08 18:33:37.786672] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.754 [2024-10-08 18:33:37.786691] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.754 [2024-10-08 18:33:37.786703] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.754 [2024-10-08 18:33:37.789698] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.754 4304.60 IOPS, 16.81 MiB/s [2024-10-08T16:33:38.041Z] [2024-10-08 18:33:37.800169] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.754 [2024-10-08 18:33:37.800477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.754 [2024-10-08 18:33:37.800504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.754 [2024-10-08 18:33:37.800519] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.754 [2024-10-08 18:33:37.800714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.754 [2024-10-08 18:33:37.800922] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.754 [2024-10-08 18:33:37.800940] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.754 [2024-10-08 18:33:37.800951] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.754 [2024-10-08 18:33:37.803783] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.754 [2024-10-08 18:33:37.813254] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.754 [2024-10-08 18:33:37.813625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.754 [2024-10-08 18:33:37.813667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.754 [2024-10-08 18:33:37.813683] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.754 [2024-10-08 18:33:37.813929] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.754 [2024-10-08 18:33:37.814122] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.754 [2024-10-08 18:33:37.814140] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.754 [2024-10-08 18:33:37.814152] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.754 [2024-10-08 18:33:37.816971] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.754 [2024-10-08 18:33:37.826332] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.754 [2024-10-08 18:33:37.826823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.754 [2024-10-08 18:33:37.826864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.754 [2024-10-08 18:33:37.826880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.754 [2024-10-08 18:33:37.827108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.754 [2024-10-08 18:33:37.827342] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.754 [2024-10-08 18:33:37.827363] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.754 [2024-10-08 18:33:37.827376] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.754 [2024-10-08 18:33:37.830238] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.754 [2024-10-08 18:33:37.839443] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.754 [2024-10-08 18:33:37.839837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.754 [2024-10-08 18:33:37.839868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.754 [2024-10-08 18:33:37.839884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.754 [2024-10-08 18:33:37.840106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.754 [2024-10-08 18:33:37.840341] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.755 [2024-10-08 18:33:37.840361] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.755 [2024-10-08 18:33:37.840374] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.755 [2024-10-08 18:33:37.843173] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.755 [2024-10-08 18:33:37.852709] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.755 [2024-10-08 18:33:37.853071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.755 [2024-10-08 18:33:37.853114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.755 [2024-10-08 18:33:37.853129] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.755 [2024-10-08 18:33:37.853407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.755 [2024-10-08 18:33:37.853619] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.755 [2024-10-08 18:33:37.853638] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.755 [2024-10-08 18:33:37.853649] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.755 [2024-10-08 18:33:37.856480] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.755 [2024-10-08 18:33:37.865727] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.755 [2024-10-08 18:33:37.866135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.755 [2024-10-08 18:33:37.866190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.755 [2024-10-08 18:33:37.866206] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.755 [2024-10-08 18:33:37.866481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.755 [2024-10-08 18:33:37.866680] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.755 [2024-10-08 18:33:37.866699] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.755 [2024-10-08 18:33:37.866711] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.755 [2024-10-08 18:33:37.869663] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.755 [2024-10-08 18:33:37.878864] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.755 [2024-10-08 18:33:37.879311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.755 [2024-10-08 18:33:37.879358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.755 [2024-10-08 18:33:37.879374] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.755 [2024-10-08 18:33:37.879613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.755 [2024-10-08 18:33:37.879832] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.755 [2024-10-08 18:33:37.879851] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.755 [2024-10-08 18:33:37.879863] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.755 [2024-10-08 18:33:37.883201] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.755 [2024-10-08 18:33:37.892244] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.755 [2024-10-08 18:33:37.892686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.755 [2024-10-08 18:33:37.892743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.755 [2024-10-08 18:33:37.892758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.755 [2024-10-08 18:33:37.893000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.755 [2024-10-08 18:33:37.893193] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.755 [2024-10-08 18:33:37.893211] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.755 [2024-10-08 18:33:37.893223] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.755 [2024-10-08 18:33:37.896177] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.755 [2024-10-08 18:33:37.905304] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.755 [2024-10-08 18:33:37.905633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.755 [2024-10-08 18:33:37.905659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.755 [2024-10-08 18:33:37.905673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.755 [2024-10-08 18:33:37.905889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.755 [2024-10-08 18:33:37.906098] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.755 [2024-10-08 18:33:37.906116] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.755 [2024-10-08 18:33:37.906128] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.755 [2024-10-08 18:33:37.909057] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.755 [2024-10-08 18:33:37.918372] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.755 [2024-10-08 18:33:37.918738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.755 [2024-10-08 18:33:37.918766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.755 [2024-10-08 18:33:37.918782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.755 [2024-10-08 18:33:37.919018] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.755 [2024-10-08 18:33:37.919227] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.755 [2024-10-08 18:33:37.919246] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.755 [2024-10-08 18:33:37.919258] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.755 [2024-10-08 18:33:37.922222] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.755 [2024-10-08 18:33:37.931487] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.755 [2024-10-08 18:33:37.931824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.755 [2024-10-08 18:33:37.931852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.755 [2024-10-08 18:33:37.931867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.755 [2024-10-08 18:33:37.932103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.755 [2024-10-08 18:33:37.932325] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.755 [2024-10-08 18:33:37.932345] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.755 [2024-10-08 18:33:37.932357] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.755 [2024-10-08 18:33:37.935233] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.755 [2024-10-08 18:33:37.944487] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.755 [2024-10-08 18:33:37.944865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.755 [2024-10-08 18:33:37.944905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.755 [2024-10-08 18:33:37.944922] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.755 [2024-10-08 18:33:37.945143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.755 [2024-10-08 18:33:37.945379] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.755 [2024-10-08 18:33:37.945399] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.755 [2024-10-08 18:33:37.945412] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.755 [2024-10-08 18:33:37.948311] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.755 [2024-10-08 18:33:37.957803] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.755 [2024-10-08 18:33:37.958194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.755 [2024-10-08 18:33:37.958236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.755 [2024-10-08 18:33:37.958252] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.755 [2024-10-08 18:33:37.958504] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.755 [2024-10-08 18:33:37.958752] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.755 [2024-10-08 18:33:37.958772] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.755 [2024-10-08 18:33:37.958785] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.755 [2024-10-08 18:33:37.961913] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.755 [2024-10-08 18:33:37.971136] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.755 [2024-10-08 18:33:37.971498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.755 [2024-10-08 18:33:37.971548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.755 [2024-10-08 18:33:37.971570] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.755 [2024-10-08 18:33:37.971803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.755 [2024-10-08 18:33:37.971996] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.755 [2024-10-08 18:33:37.972014] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.755 [2024-10-08 18:33:37.972026] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.755 [2024-10-08 18:33:37.975073] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.755 [2024-10-08 18:33:37.984568] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.755 [2024-10-08 18:33:37.984932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.755 [2024-10-08 18:33:37.984959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.756 [2024-10-08 18:33:37.984975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.756 [2024-10-08 18:33:37.985211] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.756 [2024-10-08 18:33:37.985460] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.756 [2024-10-08 18:33:37.985482] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.756 [2024-10-08 18:33:37.985495] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.756 [2024-10-08 18:33:37.988544] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.756 [2024-10-08 18:33:37.997713] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.756 [2024-10-08 18:33:37.998078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.756 [2024-10-08 18:33:37.998120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.756 [2024-10-08 18:33:37.998136] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.756 [2024-10-08 18:33:37.998395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.756 [2024-10-08 18:33:37.998608] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.756 [2024-10-08 18:33:37.998627] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.756 [2024-10-08 18:33:37.998639] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.756 [2024-10-08 18:33:38.001523] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.756 [2024-10-08 18:33:38.010863] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.756 [2024-10-08 18:33:38.011261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.756 [2024-10-08 18:33:38.011309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.756 [2024-10-08 18:33:38.011363] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.756 [2024-10-08 18:33:38.011611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.756 [2024-10-08 18:33:38.011805] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.756 [2024-10-08 18:33:38.011827] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.756 [2024-10-08 18:33:38.011840] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.756 [2024-10-08 18:33:38.014670] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:19.756 [2024-10-08 18:33:38.023980] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.756 [2024-10-08 18:33:38.024343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.756 [2024-10-08 18:33:38.024370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.756 [2024-10-08 18:33:38.024386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.756 [2024-10-08 18:33:38.024621] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.756 [2024-10-08 18:33:38.024829] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.756 [2024-10-08 18:33:38.024848] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.756 [2024-10-08 18:33:38.024860] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:19.756 [2024-10-08 18:33:38.027780] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:19.756 [2024-10-08 18:33:38.037078] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:19.756 [2024-10-08 18:33:38.037442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:19.756 [2024-10-08 18:33:38.037471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:19.756 [2024-10-08 18:33:38.037487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:19.756 [2024-10-08 18:33:38.037745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:19.756 [2024-10-08 18:33:38.037970] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:19.756 [2024-10-08 18:33:38.037989] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:19.756 [2024-10-08 18:33:38.038003] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.014 [2024-10-08 18:33:38.041332] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.014 [2024-10-08 18:33:38.050430] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.014 [2024-10-08 18:33:38.050833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-10-08 18:33:38.050859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.014 [2024-10-08 18:33:38.050875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.014 [2024-10-08 18:33:38.051112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.014 [2024-10-08 18:33:38.051355] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.014 [2024-10-08 18:33:38.051376] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.014 [2024-10-08 18:33:38.051388] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.014 [2024-10-08 18:33:38.054376] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.014 [2024-10-08 18:33:38.063511] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.014 [2024-10-08 18:33:38.063942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-10-08 18:33:38.063984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.014 [2024-10-08 18:33:38.064001] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.014 [2024-10-08 18:33:38.064241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.014 [2024-10-08 18:33:38.064479] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.014 [2024-10-08 18:33:38.064499] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.014 [2024-10-08 18:33:38.064511] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.014 [2024-10-08 18:33:38.067412] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.014 [2024-10-08 18:33:38.076652] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.014 [2024-10-08 18:33:38.077009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-10-08 18:33:38.077036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.014 [2024-10-08 18:33:38.077066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.014 [2024-10-08 18:33:38.077324] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.014 [2024-10-08 18:33:38.077539] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.014 [2024-10-08 18:33:38.077558] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.014 [2024-10-08 18:33:38.077570] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.014 [2024-10-08 18:33:38.080372] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.014 [2024-10-08 18:33:38.089844] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.014 [2024-10-08 18:33:38.090337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-10-08 18:33:38.090380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.014 [2024-10-08 18:33:38.090396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.014 [2024-10-08 18:33:38.090647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.014 [2024-10-08 18:33:38.090855] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.014 [2024-10-08 18:33:38.090873] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.014 [2024-10-08 18:33:38.090885] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.014 [2024-10-08 18:33:38.093748] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.014 [2024-10-08 18:33:38.102935] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.014 [2024-10-08 18:33:38.103328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.014 [2024-10-08 18:33:38.103356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.014 [2024-10-08 18:33:38.103371] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.014 [2024-10-08 18:33:38.103602] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.015 [2024-10-08 18:33:38.103828] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.015 [2024-10-08 18:33:38.103846] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.015 [2024-10-08 18:33:38.103858] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.015 [2024-10-08 18:33:38.106764] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.015 [2024-10-08 18:33:38.116074] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.015 [2024-10-08 18:33:38.116443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.015 [2024-10-08 18:33:38.116473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.015 [2024-10-08 18:33:38.116489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.015 [2024-10-08 18:33:38.116719] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.015 [2024-10-08 18:33:38.116932] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.015 [2024-10-08 18:33:38.116952] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.015 [2024-10-08 18:33:38.116964] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.015 [2024-10-08 18:33:38.120054] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.015 [2024-10-08 18:33:38.129734] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.015 [2024-10-08 18:33:38.130111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.015 [2024-10-08 18:33:38.130140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.015 [2024-10-08 18:33:38.130156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.015 [2024-10-08 18:33:38.130380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.015 [2024-10-08 18:33:38.130615] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.015 [2024-10-08 18:33:38.130636] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.015 [2024-10-08 18:33:38.130664] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.015 [2024-10-08 18:33:38.134186] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.015 [2024-10-08 18:33:38.143072] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.015 [2024-10-08 18:33:38.143415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.015 [2024-10-08 18:33:38.143444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.015 [2024-10-08 18:33:38.143460] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.015 [2024-10-08 18:33:38.143703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.015 [2024-10-08 18:33:38.143897] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.015 [2024-10-08 18:33:38.143915] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.015 [2024-10-08 18:33:38.143931] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.015 [2024-10-08 18:33:38.146937] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.015 [2024-10-08 18:33:38.156378] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.015 [2024-10-08 18:33:38.156827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.015 [2024-10-08 18:33:38.156868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.015 [2024-10-08 18:33:38.156884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.015 [2024-10-08 18:33:38.157124] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.015 [2024-10-08 18:33:38.157366] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.015 [2024-10-08 18:33:38.157388] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.015 [2024-10-08 18:33:38.157402] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.015 [2024-10-08 18:33:38.160462] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.015 [2024-10-08 18:33:38.169726] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.015 [2024-10-08 18:33:38.170091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.015 [2024-10-08 18:33:38.170134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.015 [2024-10-08 18:33:38.170149] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.015 [2024-10-08 18:33:38.170401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.015 [2024-10-08 18:33:38.170627] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.015 [2024-10-08 18:33:38.170660] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.015 [2024-10-08 18:33:38.170672] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.015 [2024-10-08 18:33:38.173685] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.015 [2024-10-08 18:33:38.182868] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.015 [2024-10-08 18:33:38.183196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.015 [2024-10-08 18:33:38.183224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.015 [2024-10-08 18:33:38.183239] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.015 [2024-10-08 18:33:38.183507] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.015 [2024-10-08 18:33:38.183719] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.015 [2024-10-08 18:33:38.183738] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.015 [2024-10-08 18:33:38.183750] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.015 [2024-10-08 18:33:38.186689] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.015 [2024-10-08 18:33:38.196010] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.015 [2024-10-08 18:33:38.196378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.015 [2024-10-08 18:33:38.196427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.015 [2024-10-08 18:33:38.196444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.015 [2024-10-08 18:33:38.196711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.015 [2024-10-08 18:33:38.196904] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.015 [2024-10-08 18:33:38.196923] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.015 [2024-10-08 18:33:38.196934] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.015 [2024-10-08 18:33:38.199841] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.015 [2024-10-08 18:33:38.209259] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.015 [2024-10-08 18:33:38.209622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.015 [2024-10-08 18:33:38.209649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.015 [2024-10-08 18:33:38.209665] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.015 [2024-10-08 18:33:38.209886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.015 [2024-10-08 18:33:38.210097] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.015 [2024-10-08 18:33:38.210116] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.015 [2024-10-08 18:33:38.210127] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.015 [2024-10-08 18:33:38.213066] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.015 [2024-10-08 18:33:38.223008] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.015 [2024-10-08 18:33:38.223368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.015 [2024-10-08 18:33:38.223397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.015 [2024-10-08 18:33:38.223413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.015 [2024-10-08 18:33:38.223628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.015 [2024-10-08 18:33:38.223854] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.015 [2024-10-08 18:33:38.223874] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.015 [2024-10-08 18:33:38.223887] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.015 [2024-10-08 18:33:38.227032] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.015 [2024-10-08 18:33:38.236258] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.015 [2024-10-08 18:33:38.236728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.015 [2024-10-08 18:33:38.236780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.015 [2024-10-08 18:33:38.236795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.015 [2024-10-08 18:33:38.237010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.015 [2024-10-08 18:33:38.237224] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.015 [2024-10-08 18:33:38.237243] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.015 [2024-10-08 18:33:38.237255] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.015 [2024-10-08 18:33:38.240300] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.015 [2024-10-08 18:33:38.249461] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.015 [2024-10-08 18:33:38.249925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.016 [2024-10-08 18:33:38.249967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.016 [2024-10-08 18:33:38.249985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.016 [2024-10-08 18:33:38.250225] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.016 [2024-10-08 18:33:38.250468] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.016 [2024-10-08 18:33:38.250489] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.016 [2024-10-08 18:33:38.250501] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.016 [2024-10-08 18:33:38.253441] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.016 [2024-10-08 18:33:38.262579] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.016 [2024-10-08 18:33:38.263065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.016 [2024-10-08 18:33:38.263116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.016 [2024-10-08 18:33:38.263132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.016 [2024-10-08 18:33:38.263388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.016 [2024-10-08 18:33:38.263588] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.016 [2024-10-08 18:33:38.263621] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.016 [2024-10-08 18:33:38.263633] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.016 [2024-10-08 18:33:38.266520] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.016 [2024-10-08 18:33:38.275868] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.016 [2024-10-08 18:33:38.276232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.016 [2024-10-08 18:33:38.276259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.016 [2024-10-08 18:33:38.276300] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.016 [2024-10-08 18:33:38.276541] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.016 [2024-10-08 18:33:38.276751] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.016 [2024-10-08 18:33:38.276770] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.016 [2024-10-08 18:33:38.276782] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.016 [2024-10-08 18:33:38.279690] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.016 [2024-10-08 18:33:38.289044] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.016 [2024-10-08 18:33:38.289517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.016 [2024-10-08 18:33:38.289569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.016 [2024-10-08 18:33:38.289584] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.016 [2024-10-08 18:33:38.289825] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.016 [2024-10-08 18:33:38.290017] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.016 [2024-10-08 18:33:38.290036] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.016 [2024-10-08 18:33:38.290048] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.016 [2024-10-08 18:33:38.292955] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.274 [2024-10-08 18:33:38.302567] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.274 [2024-10-08 18:33:38.302990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.274 [2024-10-08 18:33:38.303041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.274 [2024-10-08 18:33:38.303056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.274 [2024-10-08 18:33:38.303283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.274 [2024-10-08 18:33:38.303504] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.274 [2024-10-08 18:33:38.303524] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.274 [2024-10-08 18:33:38.303536] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.274 [2024-10-08 18:33:38.306775] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.274 [2024-10-08 18:33:38.315695] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.274 [2024-10-08 18:33:38.316107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.274 [2024-10-08 18:33:38.316160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.274 [2024-10-08 18:33:38.316175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.274 [2024-10-08 18:33:38.316441] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.274 [2024-10-08 18:33:38.316655] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.274 [2024-10-08 18:33:38.316674] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.274 [2024-10-08 18:33:38.316685] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.274 [2024-10-08 18:33:38.319577] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.274 [2024-10-08 18:33:38.328927] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.274 [2024-10-08 18:33:38.329302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.274 [2024-10-08 18:33:38.329330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.274 [2024-10-08 18:33:38.329368] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.274 [2024-10-08 18:33:38.329611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.274 [2024-10-08 18:33:38.329818] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.274 [2024-10-08 18:33:38.329836] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.274 [2024-10-08 18:33:38.329848] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.274 [2024-10-08 18:33:38.332761] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.274 [2024-10-08 18:33:38.342106] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.274 [2024-10-08 18:33:38.342496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.274 [2024-10-08 18:33:38.342525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.274 [2024-10-08 18:33:38.342541] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.274 [2024-10-08 18:33:38.342782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.274 [2024-10-08 18:33:38.342990] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.274 [2024-10-08 18:33:38.343009] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.274 [2024-10-08 18:33:38.343021] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.274 [2024-10-08 18:33:38.345921] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.274 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1357168 Killed "${NVMF_APP[@]}" "$@" 00:26:20.274 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:26:20.274 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:20.274 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:20.274 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:20.274 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:20.274 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=1358258 00:26:20.274 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:20.274 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 1358258 00:26:20.274 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1358258 ']' 00:26:20.274 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.274 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:20.274 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:20.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
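At this point bdevperf.sh has killed the previous NVMF_APP process (pid 1357168) and relaunches nvmf_tgt with "-m 0xE". In SPDK applications the -m argument is a hexadecimal CPU core mask, so 0xE (binary 1110) places the target's reactors on cores 1, 2 and 3 and leaves core 0 free. A small standalone sketch that decodes such a mask (again illustrative, not SPDK code):

/* coremask.c - decode an SPDK-style hexadecimal core mask such as -m 0xE. */
#include <stdio.h>

int main(void)
{
    unsigned long mask = 0xE;                   /* value taken from the nvmfappstart line above */
    printf("mask 0x%lX selects cores:", mask);
    for (int core = 0; core < 64; core++)
        if (mask & (1UL << core))
            printf(" %d", core);                /* prints: 1 2 3 */
    printf("\n");
    return 0;
}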
00:26:20.274 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:20.274 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:20.274 [2024-10-08 18:33:38.355563] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.274 [2024-10-08 18:33:38.355919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.274 [2024-10-08 18:33:38.355950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.274 [2024-10-08 18:33:38.355966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.274 [2024-10-08 18:33:38.356189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.274 [2024-10-08 18:33:38.356441] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.274 [2024-10-08 18:33:38.356463] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.274 [2024-10-08 18:33:38.356476] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.274 [2024-10-08 18:33:38.359622] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.274 [2024-10-08 18:33:38.368961] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.274 [2024-10-08 18:33:38.369309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.274 [2024-10-08 18:33:38.369337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.274 [2024-10-08 18:33:38.369353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.274 [2024-10-08 18:33:38.369567] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.274 [2024-10-08 18:33:38.369800] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.274 [2024-10-08 18:33:38.369819] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.274 [2024-10-08 18:33:38.369831] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.274 [2024-10-08 18:33:38.372985] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.274 [2024-10-08 18:33:38.382495] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.274 [2024-10-08 18:33:38.382870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.274 [2024-10-08 18:33:38.382899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.274 [2024-10-08 18:33:38.382916] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.274 [2024-10-08 18:33:38.383130] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.274 [2024-10-08 18:33:38.383398] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.274 [2024-10-08 18:33:38.383420] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.274 [2024-10-08 18:33:38.383434] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.274 [2024-10-08 18:33:38.386950] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.274 [2024-10-08 18:33:38.395966] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.274 [2024-10-08 18:33:38.396388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.274 [2024-10-08 18:33:38.396417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.274 [2024-10-08 18:33:38.396434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.274 [2024-10-08 18:33:38.396664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.274 [2024-10-08 18:33:38.396892] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.274 [2024-10-08 18:33:38.396912] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.274 [2024-10-08 18:33:38.396924] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.274 [2024-10-08 18:33:38.400051] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.274 [2024-10-08 18:33:38.401248] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:26:20.274 [2024-10-08 18:33:38.401347] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:20.274 [2024-10-08 18:33:38.409316] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.274 [2024-10-08 18:33:38.409660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.275 [2024-10-08 18:33:38.409687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.275 [2024-10-08 18:33:38.409703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.275 [2024-10-08 18:33:38.409910] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.275 [2024-10-08 18:33:38.410125] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.275 [2024-10-08 18:33:38.410144] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.275 [2024-10-08 18:33:38.410156] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.275 [2024-10-08 18:33:38.413320] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.275 [2024-10-08 18:33:38.422676] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.275 [2024-10-08 18:33:38.422994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.275 [2024-10-08 18:33:38.423021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.275 [2024-10-08 18:33:38.423037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.275 [2024-10-08 18:33:38.423253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.275 [2024-10-08 18:33:38.423502] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.275 [2024-10-08 18:33:38.423524] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.275 [2024-10-08 18:33:38.423538] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.275 [2024-10-08 18:33:38.426645] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.275 [2024-10-08 18:33:38.436033] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.275 [2024-10-08 18:33:38.436388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.275 [2024-10-08 18:33:38.436417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.275 [2024-10-08 18:33:38.436434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.275 [2024-10-08 18:33:38.436664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.275 [2024-10-08 18:33:38.436879] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.275 [2024-10-08 18:33:38.436903] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.275 [2024-10-08 18:33:38.436917] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.275 [2024-10-08 18:33:38.439980] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.275 [2024-10-08 18:33:38.449385] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.275 [2024-10-08 18:33:38.449804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.275 [2024-10-08 18:33:38.449831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.275 [2024-10-08 18:33:38.449861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.275 [2024-10-08 18:33:38.450097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.275 [2024-10-08 18:33:38.450325] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.275 [2024-10-08 18:33:38.450346] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.275 [2024-10-08 18:33:38.450360] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.275 [2024-10-08 18:33:38.453410] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.275 [2024-10-08 18:33:38.462787] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.275 [2024-10-08 18:33:38.463161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.275 [2024-10-08 18:33:38.463204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.275 [2024-10-08 18:33:38.463220] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.275 [2024-10-08 18:33:38.463459] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.275 [2024-10-08 18:33:38.463702] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.275 [2024-10-08 18:33:38.463721] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.275 [2024-10-08 18:33:38.463733] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.275 [2024-10-08 18:33:38.466749] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.275 [2024-10-08 18:33:38.468862] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:20.275 [2024-10-08 18:33:38.476069] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.275 [2024-10-08 18:33:38.476558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.275 [2024-10-08 18:33:38.476592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.275 [2024-10-08 18:33:38.476611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.275 [2024-10-08 18:33:38.476856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.275 [2024-10-08 18:33:38.477056] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.275 [2024-10-08 18:33:38.477076] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.275 [2024-10-08 18:33:38.477090] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.275 [2024-10-08 18:33:38.480413] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.275 [2024-10-08 18:33:38.489598] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.275 [2024-10-08 18:33:38.490042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.275 [2024-10-08 18:33:38.490090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.275 [2024-10-08 18:33:38.490109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.275 [2024-10-08 18:33:38.490351] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.275 [2024-10-08 18:33:38.490583] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.275 [2024-10-08 18:33:38.490603] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.275 [2024-10-08 18:33:38.490617] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.275 [2024-10-08 18:33:38.493714] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.275 [2024-10-08 18:33:38.502938] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.275 [2024-10-08 18:33:38.503323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.275 [2024-10-08 18:33:38.503352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.275 [2024-10-08 18:33:38.503369] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.275 [2024-10-08 18:33:38.503585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.275 [2024-10-08 18:33:38.503799] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.275 [2024-10-08 18:33:38.503818] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.275 [2024-10-08 18:33:38.503831] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.275 [2024-10-08 18:33:38.506911] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.275 [2024-10-08 18:33:38.516146] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.275 [2024-10-08 18:33:38.516512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.275 [2024-10-08 18:33:38.516542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.275 [2024-10-08 18:33:38.516560] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.275 [2024-10-08 18:33:38.516793] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.275 [2024-10-08 18:33:38.517007] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.275 [2024-10-08 18:33:38.517026] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.275 [2024-10-08 18:33:38.517039] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.275 [2024-10-08 18:33:38.520033] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.275 [2024-10-08 18:33:38.529412] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.275 [2024-10-08 18:33:38.529885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.275 [2024-10-08 18:33:38.529919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.275 [2024-10-08 18:33:38.529949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.275 [2024-10-08 18:33:38.530208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.275 [2024-10-08 18:33:38.530441] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.275 [2024-10-08 18:33:38.530463] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.275 [2024-10-08 18:33:38.530477] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.275 [2024-10-08 18:33:38.533536] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.275 [2024-10-08 18:33:38.542788] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.275 [2024-10-08 18:33:38.543297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.275 [2024-10-08 18:33:38.543343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.275 [2024-10-08 18:33:38.543362] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.275 [2024-10-08 18:33:38.543599] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.275 [2024-10-08 18:33:38.543802] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.275 [2024-10-08 18:33:38.543835] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.275 [2024-10-08 18:33:38.543863] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.275 [2024-10-08 18:33:38.546979] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.275 [2024-10-08 18:33:38.556089] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.275 [2024-10-08 18:33:38.556532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.275 [2024-10-08 18:33:38.556562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.275 [2024-10-08 18:33:38.556586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.275 [2024-10-08 18:33:38.556816] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.275 [2024-10-08 18:33:38.557030] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.275 [2024-10-08 18:33:38.557050] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.275 [2024-10-08 18:33:38.557062] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.275 [2024-10-08 18:33:38.560382] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.534 [2024-10-08 18:33:38.569528] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.534 [2024-10-08 18:33:38.569954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-10-08 18:33:38.569998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.534 [2024-10-08 18:33:38.570015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.534 [2024-10-08 18:33:38.570260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.534 [2024-10-08 18:33:38.570495] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.534 [2024-10-08 18:33:38.570530] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.534 [2024-10-08 18:33:38.570544] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.534 [2024-10-08 18:33:38.573538] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.534 [2024-10-08 18:33:38.579162] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:20.534 [2024-10-08 18:33:38.579192] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:20.534 [2024-10-08 18:33:38.579221] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:20.534 [2024-10-08 18:33:38.579233] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:20.534 [2024-10-08 18:33:38.579242] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:20.534 [2024-10-08 18:33:38.580100] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:26:20.534 [2024-10-08 18:33:38.580156] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:26:20.534 [2024-10-08 18:33:38.580159] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.534 [2024-10-08 18:33:38.583112] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.534 [2024-10-08 18:33:38.583506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-10-08 18:33:38.583535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.534 [2024-10-08 18:33:38.583552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.534 [2024-10-08 18:33:38.583785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.534 [2024-10-08 18:33:38.583999] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.534 [2024-10-08 18:33:38.584019] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.534 [2024-10-08 18:33:38.584033] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:20.534 [2024-10-08 18:33:38.587209] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.534 [2024-10-08 18:33:38.596662] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.534 [2024-10-08 18:33:38.597201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-10-08 18:33:38.597239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.534 [2024-10-08 18:33:38.597259] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.534 [2024-10-08 18:33:38.597491] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.534 [2024-10-08 18:33:38.597727] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.534 [2024-10-08 18:33:38.597749] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.534 [2024-10-08 18:33:38.597764] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.534 [2024-10-08 18:33:38.600857] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.534 [2024-10-08 18:33:38.610350] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.534 [2024-10-08 18:33:38.610889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-10-08 18:33:38.610927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.534 [2024-10-08 18:33:38.610958] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.534 [2024-10-08 18:33:38.611196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.534 [2024-10-08 18:33:38.611421] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.534 [2024-10-08 18:33:38.611442] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.534 [2024-10-08 18:33:38.611458] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.534 [2024-10-08 18:33:38.614711] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.534 [2024-10-08 18:33:38.623929] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.534 [2024-10-08 18:33:38.624451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-10-08 18:33:38.624490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.534 [2024-10-08 18:33:38.624510] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.534 [2024-10-08 18:33:38.624747] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.534 [2024-10-08 18:33:38.624963] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.534 [2024-10-08 18:33:38.624983] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.534 [2024-10-08 18:33:38.624999] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.534 [2024-10-08 18:33:38.628157] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.534 [2024-10-08 18:33:38.637484] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.534 [2024-10-08 18:33:38.637923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-10-08 18:33:38.637961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.534 [2024-10-08 18:33:38.637980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.534 [2024-10-08 18:33:38.638201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.534 [2024-10-08 18:33:38.638432] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.534 [2024-10-08 18:33:38.638454] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.534 [2024-10-08 18:33:38.638471] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.534 [2024-10-08 18:33:38.641791] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.534 [2024-10-08 18:33:38.651082] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.534 [2024-10-08 18:33:38.651555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-10-08 18:33:38.651595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.534 [2024-10-08 18:33:38.651615] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.534 [2024-10-08 18:33:38.651852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.534 [2024-10-08 18:33:38.652067] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.535 [2024-10-08 18:33:38.652098] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.535 [2024-10-08 18:33:38.652113] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.535 [2024-10-08 18:33:38.655362] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.535 [2024-10-08 18:33:38.664803] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.535 [2024-10-08 18:33:38.665156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-10-08 18:33:38.665185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.535 [2024-10-08 18:33:38.665202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.535 [2024-10-08 18:33:38.665428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.535 [2024-10-08 18:33:38.665662] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.535 [2024-10-08 18:33:38.665683] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.535 [2024-10-08 18:33:38.665696] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.535 [2024-10-08 18:33:38.668860] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.535 [2024-10-08 18:33:38.678353] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.535 [2024-10-08 18:33:38.678708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-10-08 18:33:38.678736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.535 [2024-10-08 18:33:38.678752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.535 [2024-10-08 18:33:38.678966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.535 [2024-10-08 18:33:38.679184] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.535 [2024-10-08 18:33:38.679205] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.535 [2024-10-08 18:33:38.679219] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.535 [2024-10-08 18:33:38.682473] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.535 [2024-10-08 18:33:38.691923] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.535 [2024-10-08 18:33:38.692266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-10-08 18:33:38.692302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.535 [2024-10-08 18:33:38.692321] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.535 [2024-10-08 18:33:38.692537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.535 [2024-10-08 18:33:38.692755] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.535 [2024-10-08 18:33:38.692776] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.535 [2024-10-08 18:33:38.692790] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.535 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:20.535 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:26:20.535 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:20.535 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:20.535 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:20.535 [2024-10-08 18:33:38.696090] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.535 [2024-10-08 18:33:38.705513] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.535 [2024-10-08 18:33:38.705864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-10-08 18:33:38.705892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.535 [2024-10-08 18:33:38.705909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.535 [2024-10-08 18:33:38.706123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.535 [2024-10-08 18:33:38.706380] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.535 [2024-10-08 18:33:38.706403] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.535 [2024-10-08 18:33:38.706417] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.535 [2024-10-08 18:33:38.709697] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.535 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:20.535 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:20.535 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.535 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:20.535 [2024-10-08 18:33:38.717956] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:20.535 [2024-10-08 18:33:38.719051] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.535 [2024-10-08 18:33:38.719394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-10-08 18:33:38.719423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.535 [2024-10-08 18:33:38.719439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.535 [2024-10-08 18:33:38.719653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.535 [2024-10-08 18:33:38.719873] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.535 [2024-10-08 18:33:38.719893] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.535 [2024-10-08 18:33:38.719905] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.535 [2024-10-08 18:33:38.723060] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.535 [2024-10-08 18:33:38.732509] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.535 [2024-10-08 18:33:38.732967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-10-08 18:33:38.732995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.535 [2024-10-08 18:33:38.733011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.535 [2024-10-08 18:33:38.733254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.535 [2024-10-08 18:33:38.733494] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.535 [2024-10-08 18:33:38.733516] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.535 [2024-10-08 18:33:38.733530] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.535 [2024-10-08 18:33:38.736740] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.535 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.535 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:20.535 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.535 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:20.535 [2024-10-08 18:33:38.746084] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.535 [2024-10-08 18:33:38.746467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-10-08 18:33:38.746498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.536 [2024-10-08 18:33:38.746515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.536 [2024-10-08 18:33:38.746752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.536 [2024-10-08 18:33:38.746965] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.536 [2024-10-08 18:33:38.746986] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.536 [2024-10-08 18:33:38.746999] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.536 [2024-10-08 18:33:38.750257] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:20.536 [2024-10-08 18:33:38.759701] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.536 [2024-10-08 18:33:38.760206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-10-08 18:33:38.760243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.536 [2024-10-08 18:33:38.760262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.536 [2024-10-08 18:33:38.760493] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.536 [2024-10-08 18:33:38.760727] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.536 [2024-10-08 18:33:38.760748] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.536 [2024-10-08 18:33:38.760763] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:20.536 Malloc0 00:26:20.536 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.536 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:20.536 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.536 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:20.536 [2024-10-08 18:33:38.764046] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.536 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.536 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:20.536 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.536 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:20.536 [2024-10-08 18:33:38.773206] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.536 [2024-10-08 18:33:38.773556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-10-08 18:33:38.773584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0db00 with addr=10.0.0.2, port=4420 00:26:20.536 [2024-10-08 18:33:38.773600] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0db00 is same with the state(6) to be set 00:26:20.536 [2024-10-08 18:33:38.773815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0db00 (9): Bad file descriptor 00:26:20.536 [2024-10-08 18:33:38.774063] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:20.536 [2024-10-08 18:33:38.774084] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:20.536 [2024-10-08 18:33:38.774098] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:20.536 [2024-10-08 18:33:38.777358] bdev_nvme.c:2219:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.536 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.536 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:20.536 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.536 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:20.536 [2024-10-08 18:33:38.781810] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:20.536 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.536 18:33:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1357466 00:26:20.536 [2024-10-08 18:33:38.786805] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:20.793 3587.17 IOPS, 14.01 MiB/s [2024-10-08T16:33:39.080Z] [2024-10-08 18:33:38.951347] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:22.656 4095.71 IOPS, 16.00 MiB/s [2024-10-08T16:33:41.874Z] 4638.00 IOPS, 18.12 MiB/s [2024-10-08T16:33:43.243Z] 5054.00 IOPS, 19.74 MiB/s [2024-10-08T16:33:44.174Z] 5378.60 IOPS, 21.01 MiB/s [2024-10-08T16:33:45.106Z] 5651.73 IOPS, 22.08 MiB/s [2024-10-08T16:33:46.038Z] 5879.42 IOPS, 22.97 MiB/s [2024-10-08T16:33:46.969Z] 6081.54 IOPS, 23.76 MiB/s [2024-10-08T16:33:47.962Z] 6250.50 IOPS, 24.42 MiB/s [2024-10-08T16:33:47.962Z] 6392.33 IOPS, 24.97 MiB/s 00:26:29.675 Latency(us) 00:26:29.675 [2024-10-08T16:33:47.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.675 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:29.675 Verification LBA range: start 0x0 length 0x4000 00:26:29.675 Nvme1n1 : 15.01 6395.57 24.98 10451.01 0.00 7575.11 582.54 22427.88 00:26:29.675 [2024-10-08T16:33:47.962Z] =================================================================================================================== 00:26:29.675 [2024-10-08T16:33:47.962Z] Total : 6395.57 24.98 10451.01 0.00 7575.11 582.54 22427.88 00:26:29.932 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:26:29.932 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:29.932 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.932 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:29.932 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.932 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:29.932 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:29.932 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:29.932 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:26:29.932 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:29.932 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:26:29.932 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:26:29.932 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:29.932 rmmod nvme_tcp 00:26:29.932 rmmod nvme_fabrics 00:26:29.932 rmmod nvme_keyring 00:26:29.932 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:29.932 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:26:29.932 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:26:29.932 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 1358258 ']' 00:26:29.932 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 1358258 00:26:29.932 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 1358258 ']' 00:26:29.932 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 1358258 00:26:29.932 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:26:29.932 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:29.932 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1358258 00:26:29.932 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:29.932 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:29.932 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1358258' 00:26:29.932 killing process with pid 1358258 00:26:29.932 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 1358258 00:26:29.932 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 1358258 00:26:30.196 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:30.196 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:30.196 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:30.196 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:26:30.196 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save 00:26:30.196 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:30.196 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore 00:26:30.196 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:30.196 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:30.196 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.196 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:30.196 18:33:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.730 18:33:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:32.730 00:26:32.730 real 0m22.947s 00:26:32.730 user 1m0.814s 00:26:32.730 sys 0m4.306s 00:26:32.730 18:33:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:32.730 18:33:50 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:32.730 ************************************ 00:26:32.730 END TEST nvmf_bdevperf 00:26:32.730 ************************************ 00:26:32.730 18:33:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:32.730 18:33:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:32.730 18:33:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:32.730 18:33:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.730 ************************************ 00:26:32.730 START TEST nvmf_target_disconnect 00:26:32.730 ************************************ 00:26:32.730 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:32.730 * Looking for test storage... 00:26:32.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:32.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.731 --rc genhtml_branch_coverage=1 00:26:32.731 --rc genhtml_function_coverage=1 00:26:32.731 --rc genhtml_legend=1 00:26:32.731 --rc geninfo_all_blocks=1 00:26:32.731 --rc geninfo_unexecuted_blocks=1 00:26:32.731 00:26:32.731 ' 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:32.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.731 --rc genhtml_branch_coverage=1 00:26:32.731 --rc genhtml_function_coverage=1 00:26:32.731 --rc genhtml_legend=1 00:26:32.731 --rc geninfo_all_blocks=1 00:26:32.731 --rc geninfo_unexecuted_blocks=1 00:26:32.731 00:26:32.731 ' 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:32.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.731 --rc genhtml_branch_coverage=1 00:26:32.731 --rc genhtml_function_coverage=1 00:26:32.731 --rc genhtml_legend=1 00:26:32.731 --rc geninfo_all_blocks=1 00:26:32.731 --rc geninfo_unexecuted_blocks=1 00:26:32.731 00:26:32.731 ' 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:32.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.731 --rc genhtml_branch_coverage=1 00:26:32.731 --rc genhtml_function_coverage=1 00:26:32.731 --rc genhtml_legend=1 00:26:32.731 --rc geninfo_all_blocks=1 00:26:32.731 --rc geninfo_unexecuted_blocks=1 00:26:32.731 00:26:32.731 ' 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:32.731 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:32.731 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:32.732 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:32.732 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:32.732 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:32.732 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.732 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:32.732 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.732 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:32.732 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:32.732 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:26:32.732 18:33:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:35.259 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:35.259 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:26:35.259 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:35.259 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:35.259 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:35.259 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:35.259 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:35.259 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:26:35.259 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:35.259 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:26:35.259 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:26:35.259 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:26:35.259 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:26:35.259 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:26:35.259 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:26:35.259 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:35.259 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:35.259 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:35.259 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:35.260 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:35.260 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:35.260 Found net devices under 0000:09:00.0: cvl_0_0 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:35.260 Found net devices under 0000:09:00.1: cvl_0_1 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
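The probing above maps each Intel E810 port (vendor 0x8086, device 0x159b, driver ice) to its kernel net device by listing /sys/bus/pci/devices/$pci/net/. A rough stand-alone sketch of the same lookup outside the test harness; the lspci invocation and the 0x159b device ID are taken from the probe output above and are only illustrative:

  # Hypothetical helper, not part of nvmf/common.sh: list E810 ports and the
  # net devices bound to them, using the device ID seen in the probe above.
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/ 2>/dev/null)"
  done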
00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:35.260 18:33:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:35.260 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:35.260 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:35.260 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:35.260 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:35.260 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:35.260 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:35.260 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:35.260 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:35.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:35.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:26:35.260 00:26:35.260 --- 10.0.0.2 ping statistics --- 00:26:35.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:35.260 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:26:35.260 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:35.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:35.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:26:35.260 00:26:35.260 --- 10.0.0.1 ping statistics --- 00:26:35.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:35.260 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:26:35.260 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:35.260 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:26:35.260 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:35.260 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:35.260 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:35.260 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:35.260 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:35.260 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:35.260 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:35.260 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:35.260 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:35.260 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:35.260 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:35.260 ************************************ 00:26:35.260 START TEST nvmf_target_disconnect_tc1 00:26:35.260 ************************************ 00:26:35.260 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:26:35.260 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:35.261 18:33:53 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:35.261 [2024-10-08 18:33:53.222038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.261 [2024-10-08 18:33:53.222110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1036000 with addr=10.0.0.2, port=4420 00:26:35.261 [2024-10-08 18:33:53.222151] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:35.261 [2024-10-08 18:33:53.222180] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:35.261 [2024-10-08 18:33:53.222210] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:26:35.261 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:35.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:35.261 Initializing NVMe Controllers 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:35.261 00:26:35.261 real 0m0.092s 00:26:35.261 user 0m0.039s 00:26:35.261 sys 0m0.052s 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:35.261 ************************************ 00:26:35.261 END TEST nvmf_target_disconnect_tc1 00:26:35.261 ************************************ 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:35.261 ************************************ 00:26:35.261 START TEST nvmf_target_disconnect_tc2 00:26:35.261 ************************************ 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1361427 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1361427 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1361427 ']' 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:35.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:35.261 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:35.261 [2024-10-08 18:33:53.336497] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:26:35.261 [2024-10-08 18:33:53.336577] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:35.261 [2024-10-08 18:33:53.400362] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:35.261 [2024-10-08 18:33:53.508478] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:35.261 [2024-10-08 18:33:53.508531] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:35.261 [2024-10-08 18:33:53.508554] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:35.261 [2024-10-08 18:33:53.508580] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:35.261 [2024-10-08 18:33:53.508590] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:35.261 [2024-10-08 18:33:53.510122] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:26:35.261 [2024-10-08 18:33:53.510176] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:26:35.261 [2024-10-08 18:33:53.510242] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:26:35.261 [2024-10-08 18:33:53.510245] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:26:35.519 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:35.519 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:26:35.519 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:35.519 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:35.519 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:35.519 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:35.519 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:35.519 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.519 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:35.519 Malloc0 00:26:35.519 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.519 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:35.519 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.519 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:35.519 [2024-10-08 18:33:53.689341] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:35.519 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.519 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:35.519 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.519 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:35.519 18:33:53 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.519 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:35.519 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.519 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:35.519 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.519 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:35.519 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.519 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:35.519 [2024-10-08 18:33:53.717610] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:35.520 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.520 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:35.520 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.520 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:35.520 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.520 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1361460 00:26:35.520 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:35.520 18:33:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:26:38.070 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1361427 00:26:38.070 18:33:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:26:38.070 Read completed with error (sct=0, sc=8) 00:26:38.070 starting I/O failed 00:26:38.070 Read completed with error (sct=0, sc=8) 00:26:38.070 starting I/O failed 00:26:38.070 Read completed with error (sct=0, sc=8) 00:26:38.070 starting I/O failed 00:26:38.070 Read completed with error (sct=0, sc=8) 00:26:38.070 starting I/O failed 00:26:38.070 Read completed with error (sct=0, sc=8) 00:26:38.070 starting I/O failed 00:26:38.070 Read completed with error (sct=0, sc=8) 00:26:38.070 starting I/O failed 00:26:38.070 Read completed with error 
(sct=0, sc=8) 00:26:38.070 starting I/O failed 00:26:38.070 Read completed with error (sct=0, sc=8) 00:26:38.070 starting I/O failed 00:26:38.070 Write completed with error (sct=0, sc=8) 00:26:38.070 starting I/O failed 00:26:38.070 Read completed with error (sct=0, sc=8) 00:26:38.070 starting I/O failed 00:26:38.070 Read completed with error (sct=0, sc=8) 00:26:38.070 starting I/O failed 00:26:38.070 Write completed with error (sct=0, sc=8) 00:26:38.070 starting I/O failed 00:26:38.070 Write completed with error (sct=0, sc=8) 00:26:38.070 starting I/O failed 00:26:38.070 Write completed with error (sct=0, sc=8) 00:26:38.070 starting I/O failed 00:26:38.070 Write completed with error (sct=0, sc=8) 00:26:38.070 starting I/O failed 00:26:38.070 Write completed with error (sct=0, sc=8) 00:26:38.070 starting I/O failed 00:26:38.070 Write completed with error (sct=0, sc=8) 00:26:38.070 starting I/O failed 00:26:38.070 Read completed with error (sct=0, sc=8) 00:26:38.070 starting I/O failed 00:26:38.070 Read completed with error (sct=0, sc=8) 00:26:38.070 starting I/O failed 00:26:38.070 Read completed with error (sct=0, sc=8) 00:26:38.070 starting I/O failed 00:26:38.070 Read completed with error (sct=0, sc=8) 00:26:38.070 starting I/O failed 00:26:38.070 Write completed with error (sct=0, sc=8) 00:26:38.070 starting I/O failed 00:26:38.070 Write completed with error (sct=0, sc=8) 00:26:38.070 starting I/O failed 00:26:38.070 Write completed with error (sct=0, sc=8) 00:26:38.070 starting I/O failed 00:26:38.070 Read completed with error (sct=0, sc=8) 00:26:38.070 starting I/O failed 00:26:38.070 Read completed with error (sct=0, sc=8) 00:26:38.070 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Write completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Write completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 [2024-10-08 18:33:55.745697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 
00:26:38.071 starting I/O failed 00:26:38.071 Write completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Write completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Write completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Write completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Write completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Write completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Write completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Write completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Write completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Write completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Write completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Write completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Write completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Write completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 [2024-10-08 18:33:55.745997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Write completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Write completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Write completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Write completed with error (sct=0, sc=8) 00:26:38.071 
starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Write completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Write completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Write completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Write completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Write completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Write completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Write completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 [2024-10-08 18:33:55.746375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Write completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Write completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Write completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.071 Read completed with error (sct=0, sc=8) 00:26:38.071 starting I/O failed 00:26:38.072 Write completed with error (sct=0, sc=8) 00:26:38.072 starting I/O failed 00:26:38.072 Write completed with error (sct=0, sc=8) 00:26:38.072 starting I/O failed 00:26:38.072 Read completed with error (sct=0, sc=8) 00:26:38.072 starting I/O failed 00:26:38.072 Read completed with error (sct=0, sc=8) 00:26:38.072 starting I/O failed 00:26:38.072 Write completed with error (sct=0, sc=8) 00:26:38.072 starting I/O failed 00:26:38.072 Write completed with error (sct=0, sc=8) 00:26:38.072 starting I/O failed 00:26:38.072 Read completed with error (sct=0, sc=8) 00:26:38.072 starting I/O failed 00:26:38.072 Read completed with error (sct=0, sc=8) 00:26:38.072 starting I/O failed 00:26:38.072 Write completed with error (sct=0, sc=8) 00:26:38.072 starting I/O failed 00:26:38.072 Write completed with error (sct=0, sc=8) 00:26:38.072 starting I/O failed 00:26:38.072 Write completed with error (sct=0, sc=8) 00:26:38.072 starting I/O failed 00:26:38.072 Write completed with error (sct=0, sc=8) 00:26:38.072 starting I/O failed 00:26:38.072 Write completed with error (sct=0, sc=8) 00:26:38.072 starting I/O failed 00:26:38.072 Write completed with error (sct=0, sc=8) 00:26:38.072 starting I/O failed 00:26:38.072 Read completed with error (sct=0, sc=8) 00:26:38.072 starting I/O failed 00:26:38.072 Write completed with error (sct=0, sc=8) 00:26:38.072 starting I/O failed 00:26:38.072 Read completed with error (sct=0, sc=8) 00:26:38.072 starting I/O failed 00:26:38.072 Write completed with error (sct=0, sc=8) 00:26:38.072 starting I/O failed 00:26:38.072 Read completed with error (sct=0, sc=8) 00:26:38.072 starting I/O failed 00:26:38.072 Read completed with error (sct=0, sc=8) 00:26:38.072 starting 
I/O failed 00:26:38.072 Read completed with error (sct=0, sc=8) 00:26:38.072 starting I/O failed 00:26:38.072 Read completed with error (sct=0, sc=8) 00:26:38.072 starting I/O failed 00:26:38.072 Read completed with error (sct=0, sc=8) 00:26:38.072 starting I/O failed 00:26:38.072 Write completed with error (sct=0, sc=8) 00:26:38.072 starting I/O failed 00:26:38.072 [2024-10-08 18:33:55.746698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:38.072 [2024-10-08 18:33:55.746953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.072 [2024-10-08 18:33:55.747004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.072 qpair failed and we were unable to recover it. 00:26:38.072 [2024-10-08 18:33:55.747097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.072 [2024-10-08 18:33:55.747126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.072 qpair failed and we were unable to recover it. 00:26:38.072 [2024-10-08 18:33:55.747220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.072 [2024-10-08 18:33:55.747250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.072 qpair failed and we were unable to recover it. 00:26:38.072 [2024-10-08 18:33:55.747373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.072 [2024-10-08 18:33:55.747408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.072 qpair failed and we were unable to recover it. 00:26:38.072 [2024-10-08 18:33:55.747501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.072 [2024-10-08 18:33:55.747529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.072 qpair failed and we were unable to recover it. 00:26:38.072 [2024-10-08 18:33:55.747655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.072 [2024-10-08 18:33:55.747683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.072 qpair failed and we were unable to recover it. 00:26:38.072 [2024-10-08 18:33:55.747772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.072 [2024-10-08 18:33:55.747800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.072 qpair failed and we were unable to recover it. 00:26:38.072 [2024-10-08 18:33:55.747914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.072 [2024-10-08 18:33:55.747943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.072 qpair failed and we were unable to recover it. 
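errno 111 in the connect() failures above is ECONNREFUSED on Linux, which is the expected outcome once nvmf_tgt (pid 1361427) has been killed with SIGKILL: nothing is listening on 10.0.0.2:4420 any more, so every reconnect attempt is refused for as long as the target stays down. A quick way to confirm the errno name; the glibc header path below is an assumption, not something the test runs:

  # Hypothetical check, not part of the test: decode errno 111.
  grep -w 111 /usr/include/asm-generic/errno.h
  # expected: #define ECONNREFUSED 111 /* Connection refused */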
00:26:38.072 [2024-10-08 18:33:55.748084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.072 [2024-10-08 18:33:55.748112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.072 qpair failed and we were unable to recover it. 00:26:38.072 [2024-10-08 18:33:55.748252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.072 [2024-10-08 18:33:55.748297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.072 qpair failed and we were unable to recover it. 00:26:38.072 [2024-10-08 18:33:55.748398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.072 [2024-10-08 18:33:55.748426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.072 qpair failed and we were unable to recover it. 00:26:38.072 [2024-10-08 18:33:55.748523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.072 [2024-10-08 18:33:55.748552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.072 qpair failed and we were unable to recover it. 00:26:38.072 [2024-10-08 18:33:55.748674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.072 [2024-10-08 18:33:55.748702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.072 qpair failed and we were unable to recover it. 00:26:38.072 [2024-10-08 18:33:55.748816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.072 [2024-10-08 18:33:55.748845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.072 qpair failed and we were unable to recover it. 00:26:38.072 [2024-10-08 18:33:55.748955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.072 [2024-10-08 18:33:55.748983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.072 qpair failed and we were unable to recover it. 00:26:38.072 [2024-10-08 18:33:55.749077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.072 [2024-10-08 18:33:55.749105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.072 qpair failed and we were unable to recover it. 00:26:38.072 [2024-10-08 18:33:55.749219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.072 [2024-10-08 18:33:55.749247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.072 qpair failed and we were unable to recover it. 00:26:38.072 [2024-10-08 18:33:55.749350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.072 [2024-10-08 18:33:55.749378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.072 qpair failed and we were unable to recover it. 
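The repetitive completion errors above are easier to digest when aggregated after the fact. A throwaway sketch against a saved copy of this console output; the build.log filename is only an assumption:

  # Hypothetical post-mortem helper, not part of the test: count failed I/O
  # completions and group the CQ transport errors by qpair id.
  grep -c 'starting I/O failed' build.log
  grep -o 'on qpair id [0-9]*' build.log | sort | uniq -c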
00:26:38.072 [2024-10-08 18:33:55.749485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.072 [2024-10-08 18:33:55.749513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.072 qpair failed and we were unable to recover it. 00:26:38.072 [2024-10-08 18:33:55.749639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.072 [2024-10-08 18:33:55.749667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.072 qpair failed and we were unable to recover it. 00:26:38.072 [2024-10-08 18:33:55.749759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.072 [2024-10-08 18:33:55.749789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.072 qpair failed and we were unable to recover it. 00:26:38.072 [2024-10-08 18:33:55.749881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.072 [2024-10-08 18:33:55.749909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.750025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.750053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.750163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.750191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.750325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.750353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.750471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.750499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.750620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.750679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.750797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.750826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 
00:26:38.073 [2024-10-08 18:33:55.750974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.751002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.751111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.751139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.751294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.751324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.751424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.751453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.751574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.751611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.751712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.751739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.751822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.751850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.751932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.751961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.752098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.752126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.752257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.752317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 
00:26:38.073 [2024-10-08 18:33:55.752446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.752475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.752567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.752605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.752700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.752727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.752897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.752939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.753078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.753106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.753248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.753297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.753426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.753455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.753551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.753589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.753761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.753837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.754032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.754060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 
00:26:38.073 [2024-10-08 18:33:55.754179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.754206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.754323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.754364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.754488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.754517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.754672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.754700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.754839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.754866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.754949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.754976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.755050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.755077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.755218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.755245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.755386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.755427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.755543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.755605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 
00:26:38.073 [2024-10-08 18:33:55.755700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.755730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.755872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.755900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.073 [2024-10-08 18:33:55.756033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.073 [2024-10-08 18:33:55.756071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.073 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.756226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.756254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.756358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.756386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.756484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.756511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.756630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.756657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.756746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.756773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.756913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.756941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.757034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.757061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 
00:26:38.074 [2024-10-08 18:33:55.757182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.757209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.757311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.757339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.757433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.757478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.757595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.757625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.757742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.757770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.757855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.757883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.757968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.757996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.758072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.758100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.758196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.758224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.758343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.758373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 
00:26:38.074 [2024-10-08 18:33:55.758495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.758523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.758651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.758679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.758792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.758819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.758933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.758960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.759056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.759084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.759172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.759201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.759349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.759390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.759481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.759511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.759656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.759684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.759835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.759862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 
00:26:38.074 [2024-10-08 18:33:55.760000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.760028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.760134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.760163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.760302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.760331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.760421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.760449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.760567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.760600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.760720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.760747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.760866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.760893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.761009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.761036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.761144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.761171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.761267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.761320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 
00:26:38.074 [2024-10-08 18:33:55.761414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.761442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.761537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.761588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.074 qpair failed and we were unable to recover it. 00:26:38.074 [2024-10-08 18:33:55.761715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.074 [2024-10-08 18:33:55.761744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.075 qpair failed and we were unable to recover it. 00:26:38.075 [2024-10-08 18:33:55.761865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.075 [2024-10-08 18:33:55.761893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.075 qpair failed and we were unable to recover it. 00:26:38.075 [2024-10-08 18:33:55.762006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.075 [2024-10-08 18:33:55.762034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.075 qpair failed and we were unable to recover it. 00:26:38.075 [2024-10-08 18:33:55.762129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.075 [2024-10-08 18:33:55.762158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.075 qpair failed and we were unable to recover it. 00:26:38.075 [2024-10-08 18:33:55.762245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.075 [2024-10-08 18:33:55.762272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.075 qpair failed and we were unable to recover it. 00:26:38.075 [2024-10-08 18:33:55.762402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.075 [2024-10-08 18:33:55.762431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.075 qpair failed and we were unable to recover it. 00:26:38.075 [2024-10-08 18:33:55.762516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.075 [2024-10-08 18:33:55.762544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.075 qpair failed and we were unable to recover it. 00:26:38.075 [2024-10-08 18:33:55.762688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.075 [2024-10-08 18:33:55.762715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.075 qpair failed and we were unable to recover it. 
00:26:38.075 [2024-10-08 18:33:55.762831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.075 [2024-10-08 18:33:55.762859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.075 qpair failed and we were unable to recover it. 00:26:38.075 [2024-10-08 18:33:55.762946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.075 [2024-10-08 18:33:55.762975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.075 qpair failed and we were unable to recover it. 00:26:38.075 [2024-10-08 18:33:55.763121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.075 [2024-10-08 18:33:55.763170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.075 qpair failed and we were unable to recover it. 00:26:38.075 [2024-10-08 18:33:55.763316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.075 [2024-10-08 18:33:55.763346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.075 qpair failed and we were unable to recover it. 00:26:38.075 [2024-10-08 18:33:55.763442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.075 [2024-10-08 18:33:55.763470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.075 qpair failed and we were unable to recover it. 00:26:38.075 [2024-10-08 18:33:55.763573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.075 [2024-10-08 18:33:55.763611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.075 qpair failed and we were unable to recover it. 00:26:38.075 [2024-10-08 18:33:55.763752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.075 [2024-10-08 18:33:55.763780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.075 qpair failed and we were unable to recover it. 00:26:38.075 [2024-10-08 18:33:55.763900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.075 [2024-10-08 18:33:55.763929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.075 qpair failed and we were unable to recover it. 00:26:38.075 [2024-10-08 18:33:55.764042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.075 [2024-10-08 18:33:55.764070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.075 qpair failed and we were unable to recover it. 00:26:38.075 [2024-10-08 18:33:55.764189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.075 [2024-10-08 18:33:55.764217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.075 qpair failed and we were unable to recover it. 
00:26:38.075 [2024-10-08 18:33:55.764342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.075 [2024-10-08 18:33:55.764373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.075 qpair failed and we were unable to recover it. 00:26:38.075 [2024-10-08 18:33:55.764462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.075 [2024-10-08 18:33:55.764491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.075 qpair failed and we were unable to recover it. 00:26:38.075 [2024-10-08 18:33:55.764616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.075 [2024-10-08 18:33:55.764644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.075 qpair failed and we were unable to recover it. 00:26:38.075 [2024-10-08 18:33:55.764752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.075 [2024-10-08 18:33:55.764779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.075 qpair failed and we were unable to recover it. 00:26:38.075 [2024-10-08 18:33:55.764889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.075 [2024-10-08 18:33:55.764917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.075 qpair failed and we were unable to recover it. 00:26:38.075 [2024-10-08 18:33:55.765041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.075 [2024-10-08 18:33:55.765081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.075 qpair failed and we were unable to recover it. 00:26:38.075 [2024-10-08 18:33:55.765185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.075 [2024-10-08 18:33:55.765214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.075 qpair failed and we were unable to recover it. 00:26:38.075 [2024-10-08 18:33:55.765324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.075 [2024-10-08 18:33:55.765353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.075 qpair failed and we were unable to recover it. 00:26:38.075 [2024-10-08 18:33:55.765472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.075 [2024-10-08 18:33:55.765500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.075 qpair failed and we were unable to recover it. 00:26:38.075 [2024-10-08 18:33:55.765624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.075 [2024-10-08 18:33:55.765652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.075 qpair failed and we were unable to recover it. 
00:26:38.075 [2024-10-08 18:33:55.765772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.075 [2024-10-08 18:33:55.765800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.075 qpair failed and we were unable to recover it. 00:26:38.075 [2024-10-08 18:33:55.765881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.075 [2024-10-08 18:33:55.765910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.075 qpair failed and we were unable to recover it. 00:26:38.075 [2024-10-08 18:33:55.766000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.075 [2024-10-08 18:33:55.766028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.075 qpair failed and we were unable to recover it. 00:26:38.075 [2024-10-08 18:33:55.766138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.075 [2024-10-08 18:33:55.766166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.075 qpair failed and we were unable to recover it. 00:26:38.075 [2024-10-08 18:33:55.766278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.766315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.766431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.766458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.766596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.766636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.766759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.766789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.766927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.766955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.767067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.767100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 
00:26:38.076 [2024-10-08 18:33:55.767241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.767269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.767369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.767396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.767508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.767536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.767668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.767696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.767807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.767835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.767956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.767985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.768072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.768099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.768212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.768252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.768419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.768450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.768542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.768582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 
00:26:38.076 [2024-10-08 18:33:55.768678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.768706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.768850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.768880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.768993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.769046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.769175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.769216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.769326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.769355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.769446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.769474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.769587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.769614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.769699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.769727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.769819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.769846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.769937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.769966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 
00:26:38.076 [2024-10-08 18:33:55.770063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.770103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.770202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.770230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.770355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.770384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.770494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.770522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.770617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.770644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.770765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.770794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.770883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.770916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.771034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.771063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.771178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.771206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.771312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.771341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 
00:26:38.076 [2024-10-08 18:33:55.771428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.771456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.771572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.771607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.771745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.076 [2024-10-08 18:33:55.771774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.076 qpair failed and we were unable to recover it. 00:26:38.076 [2024-10-08 18:33:55.771885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.771912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.772001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.772031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.772147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.772179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.772263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.772296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.772408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.772436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.772529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.772557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.772669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.772697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 
00:26:38.077 [2024-10-08 18:33:55.772816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.772844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.772950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.772991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.773089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.773117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.773199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.773227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.773367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.773395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.773484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.773512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.773659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.773686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.773797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.773825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.773939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.773967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.774110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.774140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 
00:26:38.077 [2024-10-08 18:33:55.774278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.774311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.774429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.774456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.774568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.774597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.774719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.774747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.774863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.774890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.774982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.775011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.775118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.775159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.775312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.775353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.775470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.775499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.775601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.775629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 
00:26:38.077 [2024-10-08 18:33:55.775768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.775796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.775886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.775914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.776031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.776059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.776168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.776195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.776314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.776342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.776421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.776448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.776584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.776631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.776756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.776786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.776881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.776910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.777032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.777060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 
00:26:38.077 [2024-10-08 18:33:55.777143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.777172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.777293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.777322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.077 [2024-10-08 18:33:55.777418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.077 [2024-10-08 18:33:55.777446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.077 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.777594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.777634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.777729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.777758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.777875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.777903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.777989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.778017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.778171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.778211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.778370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.778401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.778489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.778517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 
00:26:38.078 [2024-10-08 18:33:55.778618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.778645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.778730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.778757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.778896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.778923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.779039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.779066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.779200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.779227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.779333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.779363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.779483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.779513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.779760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.779813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.780011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.780039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.780150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.780178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 
00:26:38.078 [2024-10-08 18:33:55.780298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.780327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.780440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.780468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.780595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.780623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.780735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.780769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.780856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.780884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.780965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.780993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.781073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.781100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.781229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.781269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.781407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.781437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.781582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.781611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 
00:26:38.078 [2024-10-08 18:33:55.781751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.781778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.781913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.781940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.782057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.782083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.782165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.782192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.782300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.782328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.782441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.782469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.782567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.782600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.782691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.782719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.782810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.782838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.783015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.783043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 
00:26:38.078 [2024-10-08 18:33:55.783127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.783155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.783281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.078 [2024-10-08 18:33:55.783319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.078 qpair failed and we were unable to recover it. 00:26:38.078 [2024-10-08 18:33:55.783433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.783460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.783621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.783662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.783761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.783790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.783916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.783944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.784032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.784060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.784166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.784207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.784354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.784385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.784504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.784532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 
00:26:38.079 [2024-10-08 18:33:55.784656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.784684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.784823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.784850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.784940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.784968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.785082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.785110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.785191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.785218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.785318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.785346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.785433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.785461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.785594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.785635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.785757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.785786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.785871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.785899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 
00:26:38.079 [2024-10-08 18:33:55.786015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.786042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.786165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.786193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.786321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.786362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.786484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.786518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.786678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.786706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.786824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.786852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.786992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.787020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.787121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.787161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.787317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.787346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.787491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.787519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 
00:26:38.079 [2024-10-08 18:33:55.787629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.787657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.787740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.787768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.787852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.787881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.788011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.788040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.788176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.788217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.788325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.788357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.788471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.788500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.788647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.788676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.788764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.788792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.788877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.788906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 
00:26:38.079 [2024-10-08 18:33:55.789017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.789046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.079 [2024-10-08 18:33:55.789158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.079 [2024-10-08 18:33:55.789186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.079 qpair failed and we were unable to recover it. 00:26:38.080 [2024-10-08 18:33:55.789307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.789336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 00:26:38.080 [2024-10-08 18:33:55.789455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.789483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 00:26:38.080 [2024-10-08 18:33:55.789578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.789606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 00:26:38.080 [2024-10-08 18:33:55.789721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.789749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 00:26:38.080 [2024-10-08 18:33:55.789859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.789886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 00:26:38.080 [2024-10-08 18:33:55.789969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.789998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 00:26:38.080 [2024-10-08 18:33:55.790106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.790146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 00:26:38.080 [2024-10-08 18:33:55.790265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.790312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 
00:26:38.080 [2024-10-08 18:33:55.790400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.790434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 00:26:38.080 [2024-10-08 18:33:55.790515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.790543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 00:26:38.080 [2024-10-08 18:33:55.790671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.790699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 00:26:38.080 [2024-10-08 18:33:55.790810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.790839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 00:26:38.080 [2024-10-08 18:33:55.790949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.790976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 00:26:38.080 [2024-10-08 18:33:55.791093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.791120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 00:26:38.080 [2024-10-08 18:33:55.791231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.791259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 00:26:38.080 [2024-10-08 18:33:55.791373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.791402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 00:26:38.080 [2024-10-08 18:33:55.791487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.791514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 00:26:38.080 [2024-10-08 18:33:55.791631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.791658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 
00:26:38.080 [2024-10-08 18:33:55.791777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.791804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 00:26:38.080 [2024-10-08 18:33:55.791910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.791937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 00:26:38.080 [2024-10-08 18:33:55.792062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.792091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 00:26:38.080 [2024-10-08 18:33:55.792179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.792208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 00:26:38.080 [2024-10-08 18:33:55.792332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.792360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 00:26:38.080 [2024-10-08 18:33:55.792477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.792505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 00:26:38.080 [2024-10-08 18:33:55.792618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.792646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 00:26:38.080 [2024-10-08 18:33:55.792760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.792788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 00:26:38.080 [2024-10-08 18:33:55.792932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.792962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 00:26:38.080 [2024-10-08 18:33:55.793039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.793067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 
00:26:38.080 [2024-10-08 18:33:55.793184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.793211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 00:26:38.080 [2024-10-08 18:33:55.793334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.793363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 00:26:38.080 [2024-10-08 18:33:55.793483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.793511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 00:26:38.080 [2024-10-08 18:33:55.793606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.793634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 00:26:38.080 [2024-10-08 18:33:55.793745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.793773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 00:26:38.080 [2024-10-08 18:33:55.793864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.080 [2024-10-08 18:33:55.793891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.080 qpair failed and we were unable to recover it. 00:26:38.080 [2024-10-08 18:33:55.793997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.081 [2024-10-08 18:33:55.794024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.081 qpair failed and we were unable to recover it. 00:26:38.081 [2024-10-08 18:33:55.794138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.081 [2024-10-08 18:33:55.794170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.081 qpair failed and we were unable to recover it. 00:26:38.081 [2024-10-08 18:33:55.794254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.081 [2024-10-08 18:33:55.794296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.081 qpair failed and we were unable to recover it. 00:26:38.081 [2024-10-08 18:33:55.794383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.081 [2024-10-08 18:33:55.794411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.081 qpair failed and we were unable to recover it. 
00:26:38.081 [2024-10-08 18:33:55.794531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.081 [2024-10-08 18:33:55.794558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.081 qpair failed and we were unable to recover it. 00:26:38.081 [2024-10-08 18:33:55.794682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.081 [2024-10-08 18:33:55.794709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.081 qpair failed and we were unable to recover it. 00:26:38.081 [2024-10-08 18:33:55.794845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.081 [2024-10-08 18:33:55.794872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.081 qpair failed and we were unable to recover it. 00:26:38.081 [2024-10-08 18:33:55.794990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.081 [2024-10-08 18:33:55.795018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.081 qpair failed and we were unable to recover it. 00:26:38.081 [2024-10-08 18:33:55.795155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.081 [2024-10-08 18:33:55.795182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.081 qpair failed and we were unable to recover it. 00:26:38.081 [2024-10-08 18:33:55.795300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.081 [2024-10-08 18:33:55.795329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.081 qpair failed and we were unable to recover it. 00:26:38.081 [2024-10-08 18:33:55.795496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.081 [2024-10-08 18:33:55.795525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.081 qpair failed and we were unable to recover it. 00:26:38.081 [2024-10-08 18:33:55.795637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.081 [2024-10-08 18:33:55.795671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.081 qpair failed and we were unable to recover it. 00:26:38.081 [2024-10-08 18:33:55.795793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.081 [2024-10-08 18:33:55.795820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.081 qpair failed and we were unable to recover it. 00:26:38.081 [2024-10-08 18:33:55.795916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.081 [2024-10-08 18:33:55.795944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.081 qpair failed and we were unable to recover it. 
00:26:38.081 [2024-10-08 18:33:55.796021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.081 [2024-10-08 18:33:55.796048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.081 qpair failed and we were unable to recover it. 00:26:38.081 [2024-10-08 18:33:55.796133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.081 [2024-10-08 18:33:55.796161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.081 qpair failed and we were unable to recover it. 00:26:38.081 [2024-10-08 18:33:55.796262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.081 [2024-10-08 18:33:55.796321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.081 qpair failed and we were unable to recover it. 00:26:38.081 [2024-10-08 18:33:55.796427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.081 [2024-10-08 18:33:55.796467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.081 qpair failed and we were unable to recover it. 00:26:38.081 [2024-10-08 18:33:55.796615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.081 [2024-10-08 18:33:55.796655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.081 qpair failed and we were unable to recover it. 00:26:38.081 [2024-10-08 18:33:55.796794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.081 [2024-10-08 18:33:55.796845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.081 qpair failed and we were unable to recover it. 00:26:38.081 [2024-10-08 18:33:55.796991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.081 [2024-10-08 18:33:55.797041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.081 qpair failed and we were unable to recover it. 00:26:38.081 [2024-10-08 18:33:55.797151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.081 [2024-10-08 18:33:55.797179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.081 qpair failed and we were unable to recover it. 00:26:38.081 [2024-10-08 18:33:55.797266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.081 [2024-10-08 18:33:55.797312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.081 qpair failed and we were unable to recover it. 00:26:38.081 [2024-10-08 18:33:55.797405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.081 [2024-10-08 18:33:55.797432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.081 qpair failed and we were unable to recover it. 
00:26:38.081 [2024-10-08 18:33:55.797553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.081 [2024-10-08 18:33:55.797594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420
00:26:38.081 qpair failed and we were unable to recover it.
[... the same three-line pattern -- posix_sock_create reporting connect() failed with errno = 111 (ECONNREFUSED), followed by nvme_tcp_qpair_connect_sock reporting a sock connection error against addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." -- repeats for every reconnect attempt logged between 18:33:55.797 and 18:33:55.827, cycling through the tqpair handles 0x7f8174000b90, 0x7f816c000b90, 0x7f8168000b90, and 0x1ff7060 ...]
00:26:38.087 [2024-10-08 18:33:55.827849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.087 [2024-10-08 18:33:55.827877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420
00:26:38.087 qpair failed and we were unable to recover it.
00:26:38.087 [2024-10-08 18:33:55.827995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.828023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 00:26:38.087 [2024-10-08 18:33:55.828111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.828139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 00:26:38.087 [2024-10-08 18:33:55.828238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.828299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 00:26:38.087 [2024-10-08 18:33:55.828427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.828456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 00:26:38.087 [2024-10-08 18:33:55.828573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.828603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 00:26:38.087 [2024-10-08 18:33:55.828686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.828713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 00:26:38.087 [2024-10-08 18:33:55.828850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.828879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 00:26:38.087 [2024-10-08 18:33:55.829014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.829041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 00:26:38.087 [2024-10-08 18:33:55.829138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.829166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 00:26:38.087 [2024-10-08 18:33:55.829271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.829326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 
00:26:38.087 [2024-10-08 18:33:55.829476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.829504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 00:26:38.087 [2024-10-08 18:33:55.829592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.829620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 00:26:38.087 [2024-10-08 18:33:55.829737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.829764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 00:26:38.087 [2024-10-08 18:33:55.829884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.829911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 00:26:38.087 [2024-10-08 18:33:55.829995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.830022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 00:26:38.087 [2024-10-08 18:33:55.830157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.830198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 00:26:38.087 [2024-10-08 18:33:55.830350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.830390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 00:26:38.087 [2024-10-08 18:33:55.830517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.830547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 00:26:38.087 [2024-10-08 18:33:55.830693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.830722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 00:26:38.087 [2024-10-08 18:33:55.830814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.830841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 
00:26:38.087 [2024-10-08 18:33:55.830959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.830989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 00:26:38.087 [2024-10-08 18:33:55.831075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.831108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 00:26:38.087 [2024-10-08 18:33:55.831251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.831295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 00:26:38.087 [2024-10-08 18:33:55.831381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.831408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 00:26:38.087 [2024-10-08 18:33:55.831525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.831553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 00:26:38.087 [2024-10-08 18:33:55.831650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.831678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 00:26:38.087 [2024-10-08 18:33:55.831791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.831819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 00:26:38.087 [2024-10-08 18:33:55.831935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.831963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 00:26:38.087 [2024-10-08 18:33:55.832100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.832127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 00:26:38.087 [2024-10-08 18:33:55.832248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.832294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 
00:26:38.087 [2024-10-08 18:33:55.832388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.832415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 00:26:38.087 [2024-10-08 18:33:55.832503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.832530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 00:26:38.087 [2024-10-08 18:33:55.832620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.832648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 00:26:38.087 [2024-10-08 18:33:55.832763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.832790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 00:26:38.087 [2024-10-08 18:33:55.832902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.832930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 00:26:38.087 [2024-10-08 18:33:55.833053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.833081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.087 qpair failed and we were unable to recover it. 00:26:38.087 [2024-10-08 18:33:55.833213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.087 [2024-10-08 18:33:55.833253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.833392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.833432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.833554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.833583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.833702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.833729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 
00:26:38.088 [2024-10-08 18:33:55.833843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.833871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.833956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.833985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.834063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.834091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.834190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.834231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.834339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.834368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.834457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.834485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.834566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.834594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.834717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.834770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.834929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.834987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.835107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.835136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 
00:26:38.088 [2024-10-08 18:33:55.835273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.835306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.835422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.835449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.835594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.835647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.835755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.835782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.835900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.835930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.836015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.836043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.836199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.836239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.836343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.836373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.836462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.836491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.836607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.836635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 
00:26:38.088 [2024-10-08 18:33:55.836727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.836755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.836835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.836870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.836984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.837011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.837122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.837150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.837249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.837296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.837421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.837450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.837533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.837560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.837701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.837729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.837838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.837866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.837975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.838004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 
00:26:38.088 [2024-10-08 18:33:55.838156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.838197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.838350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.838380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.838470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.838498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.838610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.838638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.838779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.838807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.838898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.088 [2024-10-08 18:33:55.838926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.088 qpair failed and we were unable to recover it. 00:26:38.088 [2024-10-08 18:33:55.839019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.839048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 00:26:38.089 [2024-10-08 18:33:55.839142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.839182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 00:26:38.089 [2024-10-08 18:33:55.839281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.839317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 00:26:38.089 [2024-10-08 18:33:55.839414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.839442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 
00:26:38.089 [2024-10-08 18:33:55.839550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.839577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 00:26:38.089 [2024-10-08 18:33:55.839694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.839721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 00:26:38.089 [2024-10-08 18:33:55.839856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.839884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 00:26:38.089 [2024-10-08 18:33:55.840000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.840028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 00:26:38.089 [2024-10-08 18:33:55.840114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.840144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 00:26:38.089 [2024-10-08 18:33:55.840228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.840256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 00:26:38.089 [2024-10-08 18:33:55.840354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.840382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 00:26:38.089 [2024-10-08 18:33:55.840478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.840507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 00:26:38.089 [2024-10-08 18:33:55.840621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.840654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 00:26:38.089 [2024-10-08 18:33:55.840799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.840827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 
00:26:38.089 [2024-10-08 18:33:55.840940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.840969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 00:26:38.089 [2024-10-08 18:33:55.841087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.841115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 00:26:38.089 [2024-10-08 18:33:55.841226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.841253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 00:26:38.089 [2024-10-08 18:33:55.841345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.841374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 00:26:38.089 [2024-10-08 18:33:55.841467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.841495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 00:26:38.089 [2024-10-08 18:33:55.841590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.841619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 00:26:38.089 [2024-10-08 18:33:55.841718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.841746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 00:26:38.089 [2024-10-08 18:33:55.841831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.841858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 00:26:38.089 [2024-10-08 18:33:55.841947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.841976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 00:26:38.089 [2024-10-08 18:33:55.842080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.842120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 
00:26:38.089 [2024-10-08 18:33:55.842220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.842249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 00:26:38.089 [2024-10-08 18:33:55.842390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.842419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 00:26:38.089 [2024-10-08 18:33:55.842544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.842572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 00:26:38.089 [2024-10-08 18:33:55.842662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.842689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 00:26:38.089 [2024-10-08 18:33:55.842802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.842830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 00:26:38.089 [2024-10-08 18:33:55.843016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.843076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 00:26:38.089 [2024-10-08 18:33:55.843193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.843220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 00:26:38.089 [2024-10-08 18:33:55.843321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.843362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 00:26:38.089 [2024-10-08 18:33:55.843459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.843489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 00:26:38.089 [2024-10-08 18:33:55.843583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.089 [2024-10-08 18:33:55.843612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.089 qpair failed and we were unable to recover it. 
00:26:38.089 [2024-10-08 18:33:55.843699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.843727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.843866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.843894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.844050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.844118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.844235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.844264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.844395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.844424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.844524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.844552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.844668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.844695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.844791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.844818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.844932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.844961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.845062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.845089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 
00:26:38.090 [2024-10-08 18:33:55.845209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.845236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.845380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.845409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.845550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.845577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.845683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.845710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.845799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.845827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.845952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.845993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.846091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.846119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.846209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.846239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.846386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.846419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.846541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.846568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 
00:26:38.090 [2024-10-08 18:33:55.846688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.846716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.846919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.846972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.847058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.847086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.847204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.847232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.847319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.847347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.847453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.847493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.847593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.847622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.847711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.847738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.847912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.847964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.848044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.848072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 
00:26:38.090 [2024-10-08 18:33:55.848183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.848211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.848332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.848373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.848475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.848504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.848648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.848676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.848787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.848814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.848924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.848952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.849093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.849120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.849213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.849240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.849344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.849384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 00:26:38.090 [2024-10-08 18:33:55.849476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.090 [2024-10-08 18:33:55.849505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.090 qpair failed and we were unable to recover it. 
00:26:38.091 [2024-10-08 18:33:55.849619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.849648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.849788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.849816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.849935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.849964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.850045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.850073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.850161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.850189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.850302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.850344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.850437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.850466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.850585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.850613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.850727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.850754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.850839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.850867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 
00:26:38.091 [2024-10-08 18:33:55.850984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.851014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.851137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.851166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.851284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.851318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.851400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.851427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.851516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.851543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.851656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.851683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.851800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.851827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.851909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.851937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.852061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.852108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.852228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.852257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 
00:26:38.091 [2024-10-08 18:33:55.852390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.852420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.852530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.852558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.852654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.852682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.852771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.852798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.852959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.853016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.853160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.853188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.853282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.853334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.853449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.853478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.853565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.853593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.853760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.853816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 
00:26:38.091 [2024-10-08 18:33:55.853971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.854029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.854164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.854191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.854341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.854370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.854487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.854515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.854628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.854656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.854788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.854816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.854934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.854975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.855108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.855137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.855272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.091 [2024-10-08 18:33:55.855321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.091 qpair failed and we were unable to recover it. 00:26:38.091 [2024-10-08 18:33:55.855425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.855455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 
00:26:38.092 [2024-10-08 18:33:55.855536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.855564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.855705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.855733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.855848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.855875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.855989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.856017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.856156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.856183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.856300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.856333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.856451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.856478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.856596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.856625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.856723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.856750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.856894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.856922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 
00:26:38.092 [2024-10-08 18:33:55.857044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.857071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.857163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.857191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.857302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.857344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.857470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.857499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.857616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.857644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.857755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.857783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.857873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.857902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.858063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.858103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.858198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.858226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.858323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.858350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 
00:26:38.092 [2024-10-08 18:33:55.858464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.858491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.858597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.858624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.858775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.858802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.858916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.858943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.859054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.859081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.859161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.859188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.859297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.859325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.859445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.859472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.859565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.859592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.859700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.859727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 
00:26:38.092 [2024-10-08 18:33:55.859862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.859890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.859970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.859998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.860151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.860192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.860327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.860367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.860496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.860536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.860633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.860661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.860781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.092 [2024-10-08 18:33:55.860836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.092 qpair failed and we were unable to recover it. 00:26:38.092 [2024-10-08 18:33:55.860986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.861037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 00:26:38.093 [2024-10-08 18:33:55.861167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.861195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 00:26:38.093 [2024-10-08 18:33:55.861314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.861342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 
00:26:38.093 [2024-10-08 18:33:55.861456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.861484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 00:26:38.093 [2024-10-08 18:33:55.861594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.861621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 00:26:38.093 [2024-10-08 18:33:55.861738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.861765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 00:26:38.093 [2024-10-08 18:33:55.861874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.861901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 00:26:38.093 [2024-10-08 18:33:55.861991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.862019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 00:26:38.093 [2024-10-08 18:33:55.862136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.862168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 00:26:38.093 [2024-10-08 18:33:55.862247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.862275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 00:26:38.093 [2024-10-08 18:33:55.862403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.862430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 00:26:38.093 [2024-10-08 18:33:55.862541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.862568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 00:26:38.093 [2024-10-08 18:33:55.862677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.862705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 
00:26:38.093 [2024-10-08 18:33:55.862796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.862823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 00:26:38.093 [2024-10-08 18:33:55.862931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.862958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 00:26:38.093 [2024-10-08 18:33:55.863074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.863102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 00:26:38.093 [2024-10-08 18:33:55.863196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.863226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 00:26:38.093 [2024-10-08 18:33:55.863334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.863362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 00:26:38.093 [2024-10-08 18:33:55.863438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.863466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 00:26:38.093 [2024-10-08 18:33:55.863546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.863573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 00:26:38.093 [2024-10-08 18:33:55.863713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.863740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 00:26:38.093 [2024-10-08 18:33:55.863852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.863879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 00:26:38.093 [2024-10-08 18:33:55.863998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.864025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 
00:26:38.093 [2024-10-08 18:33:55.864141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.864168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 00:26:38.093 [2024-10-08 18:33:55.864299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.864340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 00:26:38.093 [2024-10-08 18:33:55.864462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.864492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 00:26:38.093 [2024-10-08 18:33:55.864609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.864637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 00:26:38.093 [2024-10-08 18:33:55.864749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.864777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 00:26:38.093 [2024-10-08 18:33:55.864889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.864917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 00:26:38.093 [2024-10-08 18:33:55.865025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.865053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 00:26:38.093 [2024-10-08 18:33:55.865136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.865165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 00:26:38.093 [2024-10-08 18:33:55.865279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.865314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 00:26:38.093 [2024-10-08 18:33:55.865400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.865426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 
00:26:38.093 [2024-10-08 18:33:55.865521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.093 [2024-10-08 18:33:55.865548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.093 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.865634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.865661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.865774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.865802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.865883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.865911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.866024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.866052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.866182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.866222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.866344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.866374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.866462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.866490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.866609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.866636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.866774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.866801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 
00:26:38.094 [2024-10-08 18:33:55.866971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.867024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.867137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.867164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.867285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.867319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.867419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.867448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.867532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.867561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.867787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.867839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.868054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.868110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.868220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.868248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.868352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.868379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.868492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.868519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 
00:26:38.094 [2024-10-08 18:33:55.868604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.868632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.868723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.868750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.868892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.868921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.869040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.869069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.869196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.869236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.869333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.869362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.869451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.869479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.869619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.869646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.869790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.869818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.870003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.870030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 
00:26:38.094 [2024-10-08 18:33:55.870154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.870182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.870328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.870359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.870489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.870530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.870689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.870743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.870880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.870941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.871079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.871125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.871241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.871268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.871364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.871404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.094 [2024-10-08 18:33:55.871535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.094 [2024-10-08 18:33:55.871565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.094 qpair failed and we were unable to recover it. 00:26:38.095 [2024-10-08 18:33:55.871676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.095 [2024-10-08 18:33:55.871705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.095 qpair failed and we were unable to recover it. 
00:26:38.095 [2024-10-08 18:33:55.871844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.095 [2024-10-08 18:33:55.871873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420
00:26:38.095 qpair failed and we were unable to recover it.
00:26:38.095 [2024-10-08 18:33:55.872280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.095 [2024-10-08 18:33:55.872314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420
00:26:38.095 qpair failed and we were unable to recover it.
00:26:38.095 [2024-10-08 18:33:55.872558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.095 [2024-10-08 18:33:55.872598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420
00:26:38.095 qpair failed and we were unable to recover it.
00:26:38.095 [2024-10-08 18:33:55.874070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.095 [2024-10-08 18:33:55.874110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420
00:26:38.095 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; qpair failed and we were unable to recover it.) repeats continuously from 18:33:55.871 through 18:33:55.902 for tqpairs 0x7f8168000b90, 0x1ff7060, 0x7f8174000b90, and 0x7f816c000b90, all attempting addr=10.0.0.2, port=4420 ...]
00:26:38.100 [2024-10-08 18:33:55.902988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.100 [2024-10-08 18:33:55.903017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.100 qpair failed and we were unable to recover it. 00:26:38.100 [2024-10-08 18:33:55.903131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.100 [2024-10-08 18:33:55.903159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.100 qpair failed and we were unable to recover it. 00:26:38.100 [2024-10-08 18:33:55.903293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.100 [2024-10-08 18:33:55.903334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.100 qpair failed and we were unable to recover it. 00:26:38.100 [2024-10-08 18:33:55.903450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.100 [2024-10-08 18:33:55.903479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.100 qpair failed and we were unable to recover it. 00:26:38.100 [2024-10-08 18:33:55.903597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.100 [2024-10-08 18:33:55.903625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.100 qpair failed and we were unable to recover it. 00:26:38.100 [2024-10-08 18:33:55.903803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.100 [2024-10-08 18:33:55.903857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.100 qpair failed and we were unable to recover it. 00:26:38.100 [2024-10-08 18:33:55.904078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.100 [2024-10-08 18:33:55.904131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-10-08 18:33:55.904223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.904251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-10-08 18:33:55.904405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.904434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-10-08 18:33:55.904551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.904579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 
00:26:38.101 [2024-10-08 18:33:55.904667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.904696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-10-08 18:33:55.904837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.904865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-10-08 18:33:55.904947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.904975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-10-08 18:33:55.905111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.905151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-10-08 18:33:55.905278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.905327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-10-08 18:33:55.905432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.905462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-10-08 18:33:55.905576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.905604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-10-08 18:33:55.905787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.905815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-10-08 18:33:55.905951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.905979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-10-08 18:33:55.906103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.906130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 
00:26:38.101 [2024-10-08 18:33:55.906238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.906271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-10-08 18:33:55.906398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.906426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-10-08 18:33:55.906531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.906558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-10-08 18:33:55.906676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.906704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-10-08 18:33:55.906812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.906840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-10-08 18:33:55.906931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.906961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-10-08 18:33:55.907104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.907132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-10-08 18:33:55.907247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.907276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-10-08 18:33:55.907369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.907397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-10-08 18:33:55.907512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.907540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 
00:26:38.101 [2024-10-08 18:33:55.907662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.907690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-10-08 18:33:55.907796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.907824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-10-08 18:33:55.907939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.907967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-10-08 18:33:55.908096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.908136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-10-08 18:33:55.908260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.908295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-10-08 18:33:55.908411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.908439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-10-08 18:33:55.908558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.908585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-10-08 18:33:55.908702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.908730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-10-08 18:33:55.908829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.908856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-10-08 18:33:55.908997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.909024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 
00:26:38.101 [2024-10-08 18:33:55.909110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.909138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-10-08 18:33:55.909251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.101 [2024-10-08 18:33:55.909279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.101 qpair failed and we were unable to recover it. 00:26:38.101 [2024-10-08 18:33:55.909429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.909457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-10-08 18:33:55.909540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.909567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-10-08 18:33:55.909657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.909684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-10-08 18:33:55.909795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.909822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-10-08 18:33:55.909918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.909958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-10-08 18:33:55.910107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.910141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-10-08 18:33:55.910259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.910292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-10-08 18:33:55.910416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.910444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 
00:26:38.102 [2024-10-08 18:33:55.910560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.910586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-10-08 18:33:55.910726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.910754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-10-08 18:33:55.910977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.911039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-10-08 18:33:55.911139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.911166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-10-08 18:33:55.911254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.911281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-10-08 18:33:55.911441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.911469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-10-08 18:33:55.911582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.911609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-10-08 18:33:55.911751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.911778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-10-08 18:33:55.911873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.911901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-10-08 18:33:55.912044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.912072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 
00:26:38.102 [2024-10-08 18:33:55.912159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.912186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-10-08 18:33:55.912333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.912362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-10-08 18:33:55.912446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.912474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-10-08 18:33:55.912555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.912582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-10-08 18:33:55.912698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.912725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-10-08 18:33:55.912866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.912894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-10-08 18:33:55.913013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.913040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-10-08 18:33:55.913179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.913206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-10-08 18:33:55.913320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.913348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-10-08 18:33:55.913466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.913494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 
00:26:38.102 [2024-10-08 18:33:55.913576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.913604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-10-08 18:33:55.913692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.913719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-10-08 18:33:55.913857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.913884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-10-08 18:33:55.914008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.914035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-10-08 18:33:55.914121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.914154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-10-08 18:33:55.914298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.914326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-10-08 18:33:55.914440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.102 [2024-10-08 18:33:55.914468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.102 qpair failed and we were unable to recover it. 00:26:38.102 [2024-10-08 18:33:55.914605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.914632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-10-08 18:33:55.914708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.914735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-10-08 18:33:55.914825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.914852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 
00:26:38.103 [2024-10-08 18:33:55.914991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.915018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-10-08 18:33:55.915152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.915179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-10-08 18:33:55.915274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.915308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-10-08 18:33:55.915427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.915454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-10-08 18:33:55.915563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.915590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-10-08 18:33:55.915700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.915728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-10-08 18:33:55.915817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.915845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-10-08 18:33:55.915929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.915957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-10-08 18:33:55.916065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.916093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-10-08 18:33:55.916178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.916205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 
00:26:38.103 [2024-10-08 18:33:55.916312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.916341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-10-08 18:33:55.916459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.916487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-10-08 18:33:55.916578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.916605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-10-08 18:33:55.916744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.916771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-10-08 18:33:55.916891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.916919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-10-08 18:33:55.917013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.917041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-10-08 18:33:55.917146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2004ff0 is same with the state(6) to be set 00:26:38.103 [2024-10-08 18:33:55.917304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.917344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-10-08 18:33:55.917467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.917497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-10-08 18:33:55.917586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.917613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 
00:26:38.103 [2024-10-08 18:33:55.917729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.917758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-10-08 18:33:55.917867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.917895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-10-08 18:33:55.918011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.918038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-10-08 18:33:55.918147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.918175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-10-08 18:33:55.918253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.918280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-10-08 18:33:55.918375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.918402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-10-08 18:33:55.918489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.918517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-10-08 18:33:55.918698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.918768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-10-08 18:33:55.919054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.919121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-10-08 18:33:55.919313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.919342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 
00:26:38.103 [2024-10-08 18:33:55.919460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.919489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-10-08 18:33:55.919601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.919629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-10-08 18:33:55.919772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.919799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-10-08 18:33:55.919936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.103 [2024-10-08 18:33:55.919989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.103 qpair failed and we were unable to recover it. 00:26:38.103 [2024-10-08 18:33:55.920078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.920106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.920255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.920295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.920390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.920419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.920533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.920560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.920746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.920813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.921006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.921076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 
00:26:38.104 [2024-10-08 18:33:55.921313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.921363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.921445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.921472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.921606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.921633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.921759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.921826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.922146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.922212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.922404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.922432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.922551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.922630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.922851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.922918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.923152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.923219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.923477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.923504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 
00:26:38.104 [2024-10-08 18:33:55.923619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.923647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.923777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.923804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.923896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.923924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.924087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.924155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.924353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.924381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.924476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.924504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.924625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.924652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.924738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.924765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.924902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.924930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.925069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.925137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 
00:26:38.104 [2024-10-08 18:33:55.925394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.925421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.925560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.925642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.925972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.926042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.926282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.926357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.926460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.926487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.926598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.926625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.926742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.926770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.927036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.927103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.927319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.927347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.927460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.927488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 
00:26:38.104 [2024-10-08 18:33:55.927586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.927614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.927707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.927734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.927822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.104 [2024-10-08 18:33:55.927850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.104 qpair failed and we were unable to recover it. 00:26:38.104 [2024-10-08 18:33:55.927943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.927971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.928169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.928237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.928471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.928503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.928680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.928747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.929019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.929086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.929304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.929366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.929488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.929515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 
00:26:38.105 [2024-10-08 18:33:55.929652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.929679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.929769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.929796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.930085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.930152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.930365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.930393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.930508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.930536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.930651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.930679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.930810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.930878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.931127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.931196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.931412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.931440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.931560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.931588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 
00:26:38.105 [2024-10-08 18:33:55.931691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.931718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.931894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.931962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.932191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.932257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.932441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.932468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.932554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.932612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.932801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.932828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.932940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.932967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.933137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.933204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.933408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.933436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.933575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.933602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 
00:26:38.105 [2024-10-08 18:33:55.933857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.933885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.934149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.934217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.934424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.934452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.934571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.934599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.934716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.934743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.934858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.934885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.935142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.935208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.935437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.935465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.935570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.935597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.935793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.935860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 
00:26:38.105 [2024-10-08 18:33:55.936157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.936233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.105 [2024-10-08 18:33:55.936508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.105 [2024-10-08 18:33:55.936578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.105 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.936801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.936870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.937125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.937192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.937410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.937480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.937756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.937835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.938039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.938109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.938356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.938425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.938711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.938777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.939056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.939083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 
00:26:38.106 [2024-10-08 18:33:55.939197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.939224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.939340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.939368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.939449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.939477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.939655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.939722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.939958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.940024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.940318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.940385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.940672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.940738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.941032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.941098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.941381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.941449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.941715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.941782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 
00:26:38.106 [2024-10-08 18:33:55.942039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.942104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.942390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.942459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.942717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.942787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.943084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.943160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.943400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.943469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.943747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.943812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.944056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.944123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.944389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.944457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.944752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.944817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.945102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.945129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 
00:26:38.106 [2024-10-08 18:33:55.945241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.945268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.945394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.945460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.945778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.945850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.946141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.946209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.946523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.946595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.946865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.946932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.947181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.947249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.947460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.947528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.947816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.947884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.106 [2024-10-08 18:33:55.948178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.948244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 
00:26:38.106 [2024-10-08 18:33:55.948546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.106 [2024-10-08 18:33:55.948573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.106 qpair failed and we were unable to recover it. 00:26:38.107 [2024-10-08 18:33:55.948737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.948764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 00:26:38.107 [2024-10-08 18:33:55.948960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.949025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 00:26:38.107 [2024-10-08 18:33:55.949226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.949312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 00:26:38.107 [2024-10-08 18:33:55.949620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.949696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 00:26:38.107 [2024-10-08 18:33:55.949988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.950031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 00:26:38.107 [2024-10-08 18:33:55.950150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.950177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 00:26:38.107 [2024-10-08 18:33:55.950319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.950354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 00:26:38.107 [2024-10-08 18:33:55.950594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.950661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 00:26:38.107 [2024-10-08 18:33:55.950959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.951036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 
00:26:38.107 [2024-10-08 18:33:55.951349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.951417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 00:26:38.107 [2024-10-08 18:33:55.951718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.951795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 00:26:38.107 [2024-10-08 18:33:55.952044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.952111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 00:26:38.107 [2024-10-08 18:33:55.952412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.952489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 00:26:38.107 [2024-10-08 18:33:55.952785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.952852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 00:26:38.107 [2024-10-08 18:33:55.953133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.953199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 00:26:38.107 [2024-10-08 18:33:55.953457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.953525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 00:26:38.107 [2024-10-08 18:33:55.953816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.953882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 00:26:38.107 [2024-10-08 18:33:55.954096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.954165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 00:26:38.107 [2024-10-08 18:33:55.954422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.954491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 
00:26:38.107 [2024-10-08 18:33:55.954787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.954855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 00:26:38.107 [2024-10-08 18:33:55.955144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.955209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 00:26:38.107 [2024-10-08 18:33:55.955472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.955540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 00:26:38.107 [2024-10-08 18:33:55.955806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.955873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 00:26:38.107 [2024-10-08 18:33:55.956159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.956224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 00:26:38.107 [2024-10-08 18:33:55.956526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.956594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 00:26:38.107 [2024-10-08 18:33:55.956890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.956917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 00:26:38.107 [2024-10-08 18:33:55.957033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.957061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 00:26:38.107 [2024-10-08 18:33:55.957215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.957279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 00:26:38.107 [2024-10-08 18:33:55.957591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.957655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 
00:26:38.107 [2024-10-08 18:33:55.957939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.958001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 00:26:38.107 [2024-10-08 18:33:55.958206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.958269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 00:26:38.107 [2024-10-08 18:33:55.958585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.958650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 00:26:38.107 [2024-10-08 18:33:55.958900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.958968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 00:26:38.107 [2024-10-08 18:33:55.959242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.959345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 00:26:38.107 [2024-10-08 18:33:55.959623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.107 [2024-10-08 18:33:55.959689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.107 qpair failed and we were unable to recover it. 00:26:38.107 [2024-10-08 18:33:55.959981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.960047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.960310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.960378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.960622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.960688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.960957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.961023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 
00:26:38.108 [2024-10-08 18:33:55.961321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.961389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.961635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.961708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.962014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.962082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.962379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.962447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.962705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.962771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.963034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.963113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.963377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.963445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.963743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.963809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.964056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.964125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.964364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.964433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 
00:26:38.108 [2024-10-08 18:33:55.964686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.964752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.964999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.965066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.965328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.965396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.965644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.965711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.965912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.965979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.966259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.966357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.966587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.966654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.966951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.967017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.967327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.967394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.967702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.967770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 
00:26:38.108 [2024-10-08 18:33:55.968022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.968088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.968379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.968447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.968698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.968775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.969061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.969129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.969391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.969459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.969713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.969783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.970026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.970092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.970398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.970465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.970753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.970820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.971036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.971104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 
00:26:38.108 [2024-10-08 18:33:55.971412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.971478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.971772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.971839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.972071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.972140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.972366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.972435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.972648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.108 [2024-10-08 18:33:55.972716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.108 qpair failed and we were unable to recover it. 00:26:38.108 [2024-10-08 18:33:55.973031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.109 [2024-10-08 18:33:55.973099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.109 qpair failed and we were unable to recover it. 00:26:38.109 [2024-10-08 18:33:55.973352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.109 [2024-10-08 18:33:55.973419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.109 qpair failed and we were unable to recover it. 00:26:38.109 [2024-10-08 18:33:55.973680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.109 [2024-10-08 18:33:55.973748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.109 qpair failed and we were unable to recover it. 00:26:38.109 [2024-10-08 18:33:55.974002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.109 [2024-10-08 18:33:55.974070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.109 qpair failed and we were unable to recover it. 00:26:38.109 [2024-10-08 18:33:55.974363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.109 [2024-10-08 18:33:55.974430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.109 qpair failed and we were unable to recover it. 
00:26:38.109 - 00:26:38.113 [2024-10-08 18:33:55.974726 - 18:33:56.035550] (repeated for every connection attempt in this interval: posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.)
00:26:38.113 [2024-10-08 18:33:56.035796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.113 [2024-10-08 18:33:56.035865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.113 qpair failed and we were unable to recover it. 00:26:38.113 [2024-10-08 18:33:56.036155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.113 [2024-10-08 18:33:56.036223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.113 qpair failed and we were unable to recover it. 00:26:38.113 [2024-10-08 18:33:56.036544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.113 [2024-10-08 18:33:56.036616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.113 qpair failed and we were unable to recover it. 00:26:38.113 [2024-10-08 18:33:56.036853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.113 [2024-10-08 18:33:56.036921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.113 qpair failed and we were unable to recover it. 00:26:38.113 [2024-10-08 18:33:56.037155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.113 [2024-10-08 18:33:56.037225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.037533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.037606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.037888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.037955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.038261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.038296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.038425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.038451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.038582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.038650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 
00:26:38.114 [2024-10-08 18:33:56.038942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.039010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.039341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.039369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.039502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.039530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.039641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.039666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.039784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.039811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.039932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.039960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.040255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.040347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.040561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.040639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.040884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.040950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.041240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.041318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 
00:26:38.114 [2024-10-08 18:33:56.041550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.041618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.041872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.041939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.042205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.042272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.042563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.042638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.042932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.043000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.043261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.043361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.043621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.043688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.043929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.043998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.044246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.044332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.044639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.044706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 
00:26:38.114 [2024-10-08 18:33:56.044996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.045023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.045186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.045218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.045414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.045483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.045713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.045782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.045965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.046033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.046322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.046390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.046591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.046666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.046876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.046945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.047210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.047277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.047581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.047650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 
00:26:38.114 [2024-10-08 18:33:56.047936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.048005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.048297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.048366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.048605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.048685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.048968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.049034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.049277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.049367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.049573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.049603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.049866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.114 [2024-10-08 18:33:56.049934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.114 qpair failed and we were unable to recover it. 00:26:38.114 [2024-10-08 18:33:56.050191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.050259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.050559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.050627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.050874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.050941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 
00:26:38.115 [2024-10-08 18:33:56.051127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.051178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.051423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.051491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.051794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.051861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.052110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.052180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.052443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.052511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.052756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.052823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.053089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.053159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.053470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.053538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.053786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.053852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.054101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.054169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 
00:26:38.115 [2024-10-08 18:33:56.054461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.054528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.054788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.054855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.055079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.055147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.055404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.055472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.055698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.055765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.055986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.056055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.056362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.056431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.056735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.056802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.057059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.057127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.057417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.057486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 
00:26:38.115 [2024-10-08 18:33:56.057720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.057789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.058003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.058070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.058353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.058423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.058715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.058781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.059008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.059075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.059369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.059439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.059701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.059768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.060030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.060097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.060392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.060460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.060749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.060817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 
00:26:38.115 [2024-10-08 18:33:56.061071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.061137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.061443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.061513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.061760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.061838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.062156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.062224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.062459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.062527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.062728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.062795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.063077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.063144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.063407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.063477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.063734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.115 [2024-10-08 18:33:56.063801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.115 qpair failed and we were unable to recover it. 00:26:38.115 [2024-10-08 18:33:56.064017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.064086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 
00:26:38.116 [2024-10-08 18:33:56.064376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.064447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-10-08 18:33:56.064742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.064810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-10-08 18:33:56.065056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.065125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-10-08 18:33:56.065390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.065460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-10-08 18:33:56.065694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.065762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-10-08 18:33:56.066010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.066077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-10-08 18:33:56.066348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.066418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-10-08 18:33:56.066668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.066735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-10-08 18:33:56.066976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.067004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-10-08 18:33:56.067122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.067148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 
00:26:38.116 [2024-10-08 18:33:56.067241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.067349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-10-08 18:33:56.067568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.067633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-10-08 18:33:56.067891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.067958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-10-08 18:33:56.068252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.068333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-10-08 18:33:56.068631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.068699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-10-08 18:33:56.068973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.069040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-10-08 18:33:56.069301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.069369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-10-08 18:33:56.069611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.069682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-10-08 18:33:56.069968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.070036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-10-08 18:33:56.070274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.070360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 
00:26:38.116 [2024-10-08 18:33:56.070647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.070714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-10-08 18:33:56.070989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.071058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-10-08 18:33:56.071322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.071393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-10-08 18:33:56.071621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.071688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-10-08 18:33:56.071932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.072000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-10-08 18:33:56.072304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.072374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-10-08 18:33:56.072617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.072686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-10-08 18:33:56.072965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.073033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-10-08 18:33:56.073304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.073373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-10-08 18:33:56.073596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.073662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 
00:26:38.116 [2024-10-08 18:33:56.073915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.073983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-10-08 18:33:56.074174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.074244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-10-08 18:33:56.074564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.116 [2024-10-08 18:33:56.074643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.116 qpair failed and we were unable to recover it. 00:26:38.116 [2024-10-08 18:33:56.074897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-10-08 18:33:56.074966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-10-08 18:33:56.075226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-10-08 18:33:56.075312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-10-08 18:33:56.075559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-10-08 18:33:56.075626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-10-08 18:33:56.075865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-10-08 18:33:56.075933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-10-08 18:33:56.076220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-10-08 18:33:56.076306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-10-08 18:33:56.076560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-10-08 18:33:56.076629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-10-08 18:33:56.076862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-10-08 18:33:56.076931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 
00:26:38.117 [2024-10-08 18:33:56.077191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-10-08 18:33:56.077259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-10-08 18:33:56.077490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-10-08 18:33:56.077557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-10-08 18:33:56.077844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-10-08 18:33:56.077913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-10-08 18:33:56.078128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-10-08 18:33:56.078196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-10-08 18:33:56.078461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-10-08 18:33:56.078531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-10-08 18:33:56.078756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-10-08 18:33:56.078823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-10-08 18:33:56.079085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-10-08 18:33:56.079154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-10-08 18:33:56.079440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-10-08 18:33:56.079510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-10-08 18:33:56.079756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-10-08 18:33:56.079825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-10-08 18:33:56.080121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-10-08 18:33:56.080187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 
00:26:38.117 [2024-10-08 18:33:56.080464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-10-08 18:33:56.080535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-10-08 18:33:56.080818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-10-08 18:33:56.080886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-10-08 18:33:56.081131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-10-08 18:33:56.081201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-10-08 18:33:56.081498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-10-08 18:33:56.081566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-10-08 18:33:56.081818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-10-08 18:33:56.081885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-10-08 18:33:56.082140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-10-08 18:33:56.082207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-10-08 18:33:56.082413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-10-08 18:33:56.082484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-10-08 18:33:56.082715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-10-08 18:33:56.082783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-10-08 18:33:56.082988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-10-08 18:33:56.083057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 00:26:38.117 [2024-10-08 18:33:56.083329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.117 [2024-10-08 18:33:56.083398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.117 qpair failed and we were unable to recover it. 
00:26:38.122 [2024-10-08 18:33:56.149275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.122 [2024-10-08 18:33:56.149356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.122 qpair failed and we were unable to recover it. 00:26:38.122 [2024-10-08 18:33:56.149600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.122 [2024-10-08 18:33:56.149669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.122 qpair failed and we were unable to recover it. 00:26:38.122 [2024-10-08 18:33:56.149910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.122 [2024-10-08 18:33:56.149977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.122 qpair failed and we were unable to recover it. 00:26:38.122 [2024-10-08 18:33:56.150224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.122 [2024-10-08 18:33:56.150305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.122 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.150569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.150635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.150859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.150926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.151216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.151283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.151549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.151616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.151861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.151935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.152215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.152283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 
00:26:38.123 [2024-10-08 18:33:56.152565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.152634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.152885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.152951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.153232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.153331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.153596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.153665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.153907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.153975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.154267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.154353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.154599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.154666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.154959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.155026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.155322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.155391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.155597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.155665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 
00:26:38.123 [2024-10-08 18:33:56.155964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.156030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.156269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.156366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.156634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.156701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.156922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.156989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.157278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.157360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.157649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.157715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.157971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.158038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.158302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.158371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.158625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.158693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.158988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.159055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 
00:26:38.123 [2024-10-08 18:33:56.159332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.159400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.159664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.159731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.159981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.160049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.160314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.160384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.160624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.160692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.160938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.161006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.161250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.161333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.161573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.161641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.161866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.161933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.162179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.162247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 
00:26:38.123 [2024-10-08 18:33:56.162504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.162571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.162799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.162866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.163105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.163174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.123 qpair failed and we were unable to recover it. 00:26:38.123 [2024-10-08 18:33:56.163446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.123 [2024-10-08 18:33:56.163515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-10-08 18:33:56.163803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.163870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-10-08 18:33:56.164157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.164224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-10-08 18:33:56.164526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.164595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-10-08 18:33:56.164855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.164923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-10-08 18:33:56.165161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.165229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-10-08 18:33:56.165457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.165526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 
00:26:38.124 [2024-10-08 18:33:56.165752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.165819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-10-08 18:33:56.166111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.166179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-10-08 18:33:56.166434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.166514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-10-08 18:33:56.166736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.166800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-10-08 18:33:56.167109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.167176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-10-08 18:33:56.167443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.167514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-10-08 18:33:56.167720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.167787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-10-08 18:33:56.168034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.168103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-10-08 18:33:56.168392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.168462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-10-08 18:33:56.168749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.168817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 
00:26:38.124 [2024-10-08 18:33:56.169065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.169131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-10-08 18:33:56.169398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.169467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-10-08 18:33:56.169727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.169794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-10-08 18:33:56.170008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.170069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-10-08 18:33:56.170311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.170375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-10-08 18:33:56.170619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.170680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-10-08 18:33:56.170960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.171032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-10-08 18:33:56.171226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.171301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-10-08 18:33:56.171476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.171539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-10-08 18:33:56.171763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.171825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 
00:26:38.124 [2024-10-08 18:33:56.172039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.172100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-10-08 18:33:56.172339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.172404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-10-08 18:33:56.172624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.172687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-10-08 18:33:56.172939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.173002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-10-08 18:33:56.173230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.173307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-10-08 18:33:56.173547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.173609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-10-08 18:33:56.173817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.173879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-10-08 18:33:56.174143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.174205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.124 qpair failed and we were unable to recover it. 00:26:38.124 [2024-10-08 18:33:56.174452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.124 [2024-10-08 18:33:56.174515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.174715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.174779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 
00:26:38.125 [2024-10-08 18:33:56.174961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.175022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.175277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.175350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.175578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.175642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.175815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.175878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.176136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.176199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.176500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.176565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.176861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.176928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.177240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.177321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.177563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.177626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.177828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.177891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 
00:26:38.125 [2024-10-08 18:33:56.178167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.178229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.178502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.178571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.178818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.178899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.179142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.179204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.179442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.179506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.179766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.179835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.180097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.180159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.180406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.180471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.180710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.180778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.181081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.181144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 
00:26:38.125 [2024-10-08 18:33:56.181387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.181451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.181650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.181713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.182009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.182079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.182375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.182445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.182757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.182819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.183106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.183174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.183448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.183518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.183785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.183852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.184142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.184210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.184531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.184600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 
00:26:38.125 [2024-10-08 18:33:56.184888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.184954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.185230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.185316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.185579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.185648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.185874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.185940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.186223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.186307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.186558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.186628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.186920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.186987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.187271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.125 [2024-10-08 18:33:56.187355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.125 qpair failed and we were unable to recover it. 00:26:38.125 [2024-10-08 18:33:56.187645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.126 [2024-10-08 18:33:56.187712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.126 qpair failed and we were unable to recover it. 00:26:38.126 [2024-10-08 18:33:56.187932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.126 [2024-10-08 18:33:56.187999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.126 qpair failed and we were unable to recover it. 
00:26:38.126 [2024-10-08 18:33:56.188311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.126 [2024-10-08 18:33:56.188379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.126 qpair failed and we were unable to recover it. 00:26:38.126 [2024-10-08 18:33:56.188663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.126 [2024-10-08 18:33:56.188731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.126 qpair failed and we were unable to recover it. 00:26:38.126 [2024-10-08 18:33:56.188950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.126 [2024-10-08 18:33:56.189016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.126 qpair failed and we were unable to recover it. 00:26:38.126 [2024-10-08 18:33:56.189345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.126 [2024-10-08 18:33:56.189414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.126 qpair failed and we were unable to recover it. 00:26:38.126 [2024-10-08 18:33:56.189617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.126 [2024-10-08 18:33:56.189685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.126 qpair failed and we were unable to recover it. 00:26:38.126 [2024-10-08 18:33:56.189971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.126 [2024-10-08 18:33:56.190038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.126 qpair failed and we were unable to recover it. 00:26:38.126 [2024-10-08 18:33:56.190325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.126 [2024-10-08 18:33:56.190395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.126 qpair failed and we were unable to recover it. 00:26:38.126 [2024-10-08 18:33:56.190642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.126 [2024-10-08 18:33:56.190712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.126 qpair failed and we were unable to recover it. 00:26:38.126 [2024-10-08 18:33:56.191010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.126 [2024-10-08 18:33:56.191077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.126 qpair failed and we were unable to recover it. 00:26:38.126 [2024-10-08 18:33:56.191331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.126 [2024-10-08 18:33:56.191400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.126 qpair failed and we were unable to recover it. 
00:26:38.126 [2024-10-08 18:33:56.191669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.126 [2024-10-08 18:33:56.191737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420
00:26:38.126 qpair failed and we were unable to recover it.
00:26:38.126 [... the same three-message failure (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats, with only the microsecond timestamps changing, from 18:33:56.191984 through 18:33:56.262179 ...]
00:26:38.131 [2024-10-08 18:33:56.262427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.131 [2024-10-08 18:33:56.262496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420
00:26:38.131 qpair failed and we were unable to recover it.
00:26:38.131 [2024-10-08 18:33:56.262743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.131 [2024-10-08 18:33:56.262809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.131 qpair failed and we were unable to recover it. 00:26:38.131 [2024-10-08 18:33:56.263047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.131 [2024-10-08 18:33:56.263114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.131 qpair failed and we were unable to recover it. 00:26:38.131 [2024-10-08 18:33:56.263403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.131 [2024-10-08 18:33:56.263472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.131 qpair failed and we were unable to recover it. 00:26:38.131 [2024-10-08 18:33:56.263756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.131 [2024-10-08 18:33:56.263823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.131 qpair failed and we were unable to recover it. 00:26:38.131 [2024-10-08 18:33:56.264102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.131 [2024-10-08 18:33:56.264169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.131 qpair failed and we were unable to recover it. 00:26:38.131 [2024-10-08 18:33:56.264462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.131 [2024-10-08 18:33:56.264531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.131 qpair failed and we were unable to recover it. 00:26:38.131 [2024-10-08 18:33:56.264798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.131 [2024-10-08 18:33:56.264864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.131 qpair failed and we were unable to recover it. 00:26:38.131 [2024-10-08 18:33:56.265109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.131 [2024-10-08 18:33:56.265177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.131 qpair failed and we were unable to recover it. 00:26:38.131 [2024-10-08 18:33:56.265475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.131 [2024-10-08 18:33:56.265544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.131 qpair failed and we were unable to recover it. 00:26:38.131 [2024-10-08 18:33:56.265796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.131 [2024-10-08 18:33:56.265863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.131 qpair failed and we were unable to recover it. 
00:26:38.131 [2024-10-08 18:33:56.266097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.131 [2024-10-08 18:33:56.266165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.266447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.266517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.266769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.266836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.267098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.267164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.267397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.267469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.267706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.267773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.268037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.268105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.268404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.268473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.268705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.268772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.269055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.269134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 
00:26:38.132 [2024-10-08 18:33:56.269345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.269413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.269664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.269732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.270025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.270093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.270388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.270457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.270661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.270728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.271021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.271088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.271323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.271392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.271646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.271715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.271957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.272025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.272273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.272363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 
00:26:38.132 [2024-10-08 18:33:56.272571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.272637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.272923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.272991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.273238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.273336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.273653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.273722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.273980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.274048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.274314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.274385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.274656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.274724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.274993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.275059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.275263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.275354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.275560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.275628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 
00:26:38.132 [2024-10-08 18:33:56.275889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.275955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.276171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.276240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.276511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.276579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.276827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.276896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.277143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.277210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.277459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.277529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.277790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.277859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.278107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.278175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.278444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.278512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.132 [2024-10-08 18:33:56.278803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.278869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 
00:26:38.132 [2024-10-08 18:33:56.279113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.132 [2024-10-08 18:33:56.279183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.132 qpair failed and we were unable to recover it. 00:26:38.133 [2024-10-08 18:33:56.279430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.279499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-10-08 18:33:56.279745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.279813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-10-08 18:33:56.280015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.280082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-10-08 18:33:56.280325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.280395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-10-08 18:33:56.280616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.280684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-10-08 18:33:56.280940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.281008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-10-08 18:33:56.281303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.281374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-10-08 18:33:56.281663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.281730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-10-08 18:33:56.281983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.282062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 
00:26:38.133 [2024-10-08 18:33:56.282322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.282391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-10-08 18:33:56.282627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.282694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-10-08 18:33:56.282980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.283048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-10-08 18:33:56.283343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.283413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-10-08 18:33:56.283710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.283777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-10-08 18:33:56.284031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.284097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-10-08 18:33:56.284383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.284452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-10-08 18:33:56.284746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.284813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-10-08 18:33:56.285070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.285136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-10-08 18:33:56.285384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.285455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 
00:26:38.133 [2024-10-08 18:33:56.285749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.285816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-10-08 18:33:56.286103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.286171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-10-08 18:33:56.286420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.286489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-10-08 18:33:56.286790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.286858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-10-08 18:33:56.287076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.287141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-10-08 18:33:56.287398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.287467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-10-08 18:33:56.287719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.287787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-10-08 18:33:56.288075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.288141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-10-08 18:33:56.288433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.288501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-10-08 18:33:56.288800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.288868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 
00:26:38.133 [2024-10-08 18:33:56.289088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.289157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-10-08 18:33:56.289399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.289468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-10-08 18:33:56.289751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.289818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-10-08 18:33:56.290075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.290145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-10-08 18:33:56.290434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.290503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-10-08 18:33:56.290766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.290833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.133 [2024-10-08 18:33:56.291103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.133 [2024-10-08 18:33:56.291171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.133 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.291453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.291521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.291772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.291841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.292082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.292149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 
00:26:38.134 [2024-10-08 18:33:56.292436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.292507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.292721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.292788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.293026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.293093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.293341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.293411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.293661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.293728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.294004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.294071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.294327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.294397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.294619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.294688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.294954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.295022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.295231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.295323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 
00:26:38.134 [2024-10-08 18:33:56.295555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.295624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.295865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.295932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.296219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.296319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.296619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.296686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.296982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.297049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.297345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.297415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.297699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.297766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.297983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.298052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.298321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.298390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.298639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.298705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 
00:26:38.134 [2024-10-08 18:33:56.298962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.299029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.299269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.299349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.299585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.299651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.299962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.300029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.300332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.300402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.300653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.300720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.300990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.301058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.301303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.301373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.301669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.301737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.301990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.302057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 
00:26:38.134 [2024-10-08 18:33:56.302319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.302388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.302611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.302678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.302902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.302968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.303209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.303275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.303526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.303593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.303837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.303906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.134 [2024-10-08 18:33:56.304156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.134 [2024-10-08 18:33:56.304224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.134 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.304509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.304577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.304809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.304876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.305137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.305205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 
00:26:38.135 [2024-10-08 18:33:56.305485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.305553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.305754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.305821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.306080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.306148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.306388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.306457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.306751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.306818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.307066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.307134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.307400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.307467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.307684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.307751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.308024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.308093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.308346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.308433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 
00:26:38.135 [2024-10-08 18:33:56.308683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.308753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.309011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.309080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.309344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.309412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.309642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.309709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.309960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.310028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.310280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.310365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.310612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.310679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.310976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.311044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.311344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.311414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.311667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.311737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 
00:26:38.135 [2024-10-08 18:33:56.312018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.312085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.312396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.312464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.312679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.312747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.312967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.313035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.313322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.313391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.313683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.313749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.314001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.314070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.314273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.314356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.314637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.314703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.314956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.315022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 
00:26:38.135 [2024-10-08 18:33:56.315304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.315393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.315622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.315689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.315940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.316007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.316301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.316370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.316566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.316635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.316876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.316942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.317187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.135 [2024-10-08 18:33:56.317254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.135 qpair failed and we were unable to recover it. 00:26:38.135 [2024-10-08 18:33:56.317513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.317582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.317800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.317866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.318108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.318176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 
00:26:38.136 [2024-10-08 18:33:56.318454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.318524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.318727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.318794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.319076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.319143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.319402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.319471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.319769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.319835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.320127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.320194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.320517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.320584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.320831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.320898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.321158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.321225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.321531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.321609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 
00:26:38.136 [2024-10-08 18:33:56.321832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.321899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.322139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.322205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.322506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.322574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.322829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.322896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.323189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.323257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.323483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.323551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.323810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.323877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.324133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.324200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.324466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.324536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.324790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.324857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 
00:26:38.136 [2024-10-08 18:33:56.325141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.325209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.325532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.325600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.325858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.325926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.326190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.326258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.326572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.326638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.326891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.326959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.327163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.327230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.327495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.327566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.327834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.327901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.328109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.328178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 
00:26:38.136 [2024-10-08 18:33:56.328443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.328511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.328758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.328827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.329111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.329179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.329476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.329545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.329759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.329828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.330087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.330153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.136 [2024-10-08 18:33:56.330450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.136 [2024-10-08 18:33:56.330521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.136 qpair failed and we were unable to recover it. 00:26:38.137 [2024-10-08 18:33:56.330781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.137 [2024-10-08 18:33:56.330849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.137 qpair failed and we were unable to recover it. 00:26:38.137 [2024-10-08 18:33:56.331138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.137 [2024-10-08 18:33:56.331205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.137 qpair failed and we were unable to recover it. 00:26:38.137 [2024-10-08 18:33:56.331433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.137 [2024-10-08 18:33:56.331501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.137 qpair failed and we were unable to recover it. 
00:26:38.137 [2024-10-08 18:33:56.331724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.137 [2024-10-08 18:33:56.331792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.137 qpair failed and we were unable to recover it. 00:26:38.137 [2024-10-08 18:33:56.331982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.137 [2024-10-08 18:33:56.332050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.137 qpair failed and we were unable to recover it. 00:26:38.137 [2024-10-08 18:33:56.332303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.137 [2024-10-08 18:33:56.332372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.137 qpair failed and we were unable to recover it. 00:26:38.419 [2024-10-08 18:33:56.332621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.419 [2024-10-08 18:33:56.332689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.419 qpair failed and we were unable to recover it. 00:26:38.419 [2024-10-08 18:33:56.332980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.419 [2024-10-08 18:33:56.333048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.419 qpair failed and we were unable to recover it. 00:26:38.419 [2024-10-08 18:33:56.333317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.419 [2024-10-08 18:33:56.333387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.419 qpair failed and we were unable to recover it. 00:26:38.419 [2024-10-08 18:33:56.333629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.419 [2024-10-08 18:33:56.333698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.419 qpair failed and we were unable to recover it. 00:26:38.419 [2024-10-08 18:33:56.333967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.419 [2024-10-08 18:33:56.334035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.419 qpair failed and we were unable to recover it. 00:26:38.419 [2024-10-08 18:33:56.334240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.419 [2024-10-08 18:33:56.334345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.419 qpair failed and we were unable to recover it. 00:26:38.419 [2024-10-08 18:33:56.334632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.419 [2024-10-08 18:33:56.334712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.419 qpair failed and we were unable to recover it. 
00:26:38.419 [2024-10-08 18:33:56.334967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.419 [2024-10-08 18:33:56.335039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.419 qpair failed and we were unable to recover it. 00:26:38.419 [2024-10-08 18:33:56.335319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.419 [2024-10-08 18:33:56.335399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.419 qpair failed and we were unable to recover it. 00:26:38.419 [2024-10-08 18:33:56.335694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.419 [2024-10-08 18:33:56.335768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.419 qpair failed and we were unable to recover it. 00:26:38.419 [2024-10-08 18:33:56.335988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.419 [2024-10-08 18:33:56.336056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.419 qpair failed and we were unable to recover it. 00:26:38.419 [2024-10-08 18:33:56.336350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.419 [2024-10-08 18:33:56.336421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.419 qpair failed and we were unable to recover it. 00:26:38.419 [2024-10-08 18:33:56.336729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.419 [2024-10-08 18:33:56.336796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.419 qpair failed and we were unable to recover it. 00:26:38.419 [2024-10-08 18:33:56.337022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.419 [2024-10-08 18:33:56.337096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.419 qpair failed and we were unable to recover it. 00:26:38.419 [2024-10-08 18:33:56.337346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.419 [2024-10-08 18:33:56.337423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.419 qpair failed and we were unable to recover it. 00:26:38.419 [2024-10-08 18:33:56.337671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.419 [2024-10-08 18:33:56.337740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.419 qpair failed and we were unable to recover it. 00:26:38.419 [2024-10-08 18:33:56.337964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.419 [2024-10-08 18:33:56.338046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.419 qpair failed and we were unable to recover it. 
00:26:38.419 [2024-10-08 18:33:56.338305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.419 [2024-10-08 18:33:56.338384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.419 qpair failed and we were unable to recover it. 00:26:38.419 [2024-10-08 18:33:56.338683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.419 [2024-10-08 18:33:56.338757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.419 qpair failed and we were unable to recover it. 00:26:38.419 [2024-10-08 18:33:56.338979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.419 [2024-10-08 18:33:56.339047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.419 qpair failed and we were unable to recover it. 00:26:38.419 [2024-10-08 18:33:56.339317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.419 [2024-10-08 18:33:56.339388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.419 qpair failed and we were unable to recover it. 00:26:38.419 [2024-10-08 18:33:56.339664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.419 [2024-10-08 18:33:56.339738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.419 qpair failed and we were unable to recover it. 00:26:38.419 [2024-10-08 18:33:56.339953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.419 [2024-10-08 18:33:56.340022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.419 qpair failed and we were unable to recover it. 00:26:38.419 [2024-10-08 18:33:56.340302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.419 [2024-10-08 18:33:56.340371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.419 qpair failed and we were unable to recover it. 00:26:38.419 [2024-10-08 18:33:56.340664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.419 [2024-10-08 18:33:56.340732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.419 qpair failed and we were unable to recover it. 00:26:38.419 [2024-10-08 18:33:56.341003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.419 [2024-10-08 18:33:56.341071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.419 qpair failed and we were unable to recover it. 00:26:38.419 [2024-10-08 18:33:56.341340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.419 [2024-10-08 18:33:56.341415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.419 qpair failed and we were unable to recover it. 
00:26:38.419 [2024-10-08 18:33:56.341636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.419 [2024-10-08 18:33:56.341705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.419 qpair failed and we were unable to recover it. 00:26:38.419 [2024-10-08 18:33:56.341969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.419 [2024-10-08 18:33:56.342038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.419 qpair failed and we were unable to recover it. 00:26:38.419 [2024-10-08 18:33:56.342274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.419 [2024-10-08 18:33:56.342369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.419 qpair failed and we were unable to recover it. 00:26:38.419 [2024-10-08 18:33:56.342559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.419 [2024-10-08 18:33:56.342595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.419 qpair failed and we were unable to recover it. 00:26:38.419 [2024-10-08 18:33:56.342747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.342784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.342926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.342964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.343108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.343145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.343283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.343327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.343471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.343508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.343623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.343659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 
00:26:38.420 [2024-10-08 18:33:56.343822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.343858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.343971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.344053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.344330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.344391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.344502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.344539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.344718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.344786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.344994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.345062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.345335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.345372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.345518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.345564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.345848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.345917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.346161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.346239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 
00:26:38.420 [2024-10-08 18:33:56.346443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.346479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.346641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.346711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.346943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.347010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.347208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.347275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.347476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.347512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.347706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.347773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.348037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.348105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.348384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.348422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.348596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.348632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.348869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.348936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 
00:26:38.420 [2024-10-08 18:33:56.349221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.349332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.349507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.349543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.349811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.349847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.350031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.350067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.350332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.350368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.350510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.350545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.350789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.350859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.351060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.351129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.351400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.351437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.351583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.351620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 
00:26:38.420 [2024-10-08 18:33:56.351765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.351820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.352111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.352180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.352418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.420 [2024-10-08 18:33:56.352455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.420 qpair failed and we were unable to recover it. 00:26:38.420 [2024-10-08 18:33:56.352572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.421 [2024-10-08 18:33:56.352611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.421 qpair failed and we were unable to recover it. 00:26:38.421 [2024-10-08 18:33:56.352754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.421 [2024-10-08 18:33:56.352788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.421 qpair failed and we were unable to recover it. 00:26:38.421 [2024-10-08 18:33:56.353008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.421 [2024-10-08 18:33:56.353078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.421 qpair failed and we were unable to recover it. 00:26:38.421 [2024-10-08 18:33:56.353399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.421 [2024-10-08 18:33:56.353452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.421 qpair failed and we were unable to recover it. 00:26:38.421 [2024-10-08 18:33:56.353687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.421 [2024-10-08 18:33:56.353762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.421 qpair failed and we were unable to recover it. 00:26:38.421 [2024-10-08 18:33:56.353986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.421 [2024-10-08 18:33:56.354059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.421 qpair failed and we were unable to recover it. 00:26:38.421 [2024-10-08 18:33:56.354361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.421 [2024-10-08 18:33:56.354398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.421 qpair failed and we were unable to recover it. 
00:26:38.421 [2024-10-08 18:33:56.354517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.421 [2024-10-08 18:33:56.354565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.421 qpair failed and we were unable to recover it. 00:26:38.421 [2024-10-08 18:33:56.354838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.421 [2024-10-08 18:33:56.354907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.421 qpair failed and we were unable to recover it. 00:26:38.421 [2024-10-08 18:33:56.355123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.421 [2024-10-08 18:33:56.355197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.421 qpair failed and we were unable to recover it. 00:26:38.421 [2024-10-08 18:33:56.355460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.421 [2024-10-08 18:33:56.355498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.421 qpair failed and we were unable to recover it. 00:26:38.421 [2024-10-08 18:33:56.355689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.421 [2024-10-08 18:33:56.355759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.421 qpair failed and we were unable to recover it. 00:26:38.421 [2024-10-08 18:33:56.356026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.421 [2024-10-08 18:33:56.356095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.421 qpair failed and we were unable to recover it. 00:26:38.421 [2024-10-08 18:33:56.356366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.421 [2024-10-08 18:33:56.356404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.421 qpair failed and we were unable to recover it. 00:26:38.421 [2024-10-08 18:33:56.356550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.421 [2024-10-08 18:33:56.356590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.421 qpair failed and we were unable to recover it. 00:26:38.421 [2024-10-08 18:33:56.356732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.421 [2024-10-08 18:33:56.356765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.421 qpair failed and we were unable to recover it. 00:26:38.421 [2024-10-08 18:33:56.356982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.421 [2024-10-08 18:33:56.357071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.421 qpair failed and we were unable to recover it. 
00:26:38.421 [2024-10-08 18:33:56.357375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.421 [2024-10-08 18:33:56.357415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420
00:26:38.421 qpair failed and we were unable to recover it.
00:26:38.421 [the same three-record sequence — posix.c:1055:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats for every reconnect attempt logged between 18:33:56.357 and 18:33:56.427]
00:26:38.427 [2024-10-08 18:33:56.427511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.427 [2024-10-08 18:33:56.427589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420
00:26:38.427 qpair failed and we were unable to recover it.
00:26:38.427 [2024-10-08 18:33:56.427855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.427925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-10-08 18:33:56.428199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.428269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-10-08 18:33:56.428560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.428630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-10-08 18:33:56.428939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.429008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-10-08 18:33:56.429278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.429365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-10-08 18:33:56.429619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.429689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-10-08 18:33:56.429918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.429987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-10-08 18:33:56.430308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.430377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-10-08 18:33:56.430641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.430709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-10-08 18:33:56.430967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.431035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 
00:26:38.427 [2024-10-08 18:33:56.431333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.431403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-10-08 18:33:56.431620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.431687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-10-08 18:33:56.431907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.431974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-10-08 18:33:56.432233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.432313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-10-08 18:33:56.432571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.432640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-10-08 18:33:56.432927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.432995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-10-08 18:33:56.433261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.433364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-10-08 18:33:56.433673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.433741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-10-08 18:33:56.434028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.434097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-10-08 18:33:56.434365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.434434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 
00:26:38.427 [2024-10-08 18:33:56.434725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.434793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-10-08 18:33:56.435047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.435115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-10-08 18:33:56.436405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.436440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-10-08 18:33:56.436631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.436682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-10-08 18:33:56.436905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.436960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-10-08 18:33:56.437107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.437137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-10-08 18:33:56.437298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.437327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-10-08 18:33:56.437513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.437574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-10-08 18:33:56.437657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.437687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-10-08 18:33:56.437902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.437960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 
00:26:38.427 [2024-10-08 18:33:56.438068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.438098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-10-08 18:33:56.438247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.438275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-10-08 18:33:56.438390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.438422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-10-08 18:33:56.438552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.427 [2024-10-08 18:33:56.438606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.427 qpair failed and we were unable to recover it. 00:26:38.427 [2024-10-08 18:33:56.438781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-10-08 18:33:56.438843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-10-08 18:33:56.438967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-10-08 18:33:56.438996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-10-08 18:33:56.439119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-10-08 18:33:56.439147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-10-08 18:33:56.439262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-10-08 18:33:56.439299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-10-08 18:33:56.439450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-10-08 18:33:56.439479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-10-08 18:33:56.439598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-10-08 18:33:56.439626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 
00:26:38.428 [2024-10-08 18:33:56.439752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-10-08 18:33:56.439782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-10-08 18:33:56.439912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-10-08 18:33:56.439940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-10-08 18:33:56.440040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-10-08 18:33:56.440070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-10-08 18:33:56.440203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-10-08 18:33:56.440232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-10-08 18:33:56.440381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-10-08 18:33:56.440411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-10-08 18:33:56.440561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-10-08 18:33:56.440591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-10-08 18:33:56.440687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-10-08 18:33:56.440716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-10-08 18:33:56.440800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-10-08 18:33:56.440830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-10-08 18:33:56.440950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-10-08 18:33:56.440980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-10-08 18:33:56.441082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-10-08 18:33:56.441112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 
00:26:38.428 [2024-10-08 18:33:56.441229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-10-08 18:33:56.441259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-10-08 18:33:56.441392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-10-08 18:33:56.441423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-10-08 18:33:56.441550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-10-08 18:33:56.441580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-10-08 18:33:56.441680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-10-08 18:33:56.441709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-10-08 18:33:56.441794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-10-08 18:33:56.441823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-10-08 18:33:56.441945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-10-08 18:33:56.441974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-10-08 18:33:56.442080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-10-08 18:33:56.442114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-10-08 18:33:56.442259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-10-08 18:33:56.442293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-10-08 18:33:56.442412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-10-08 18:33:56.442442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-10-08 18:33:56.442571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-10-08 18:33:56.442600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 
00:26:38.428 [2024-10-08 18:33:56.442749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-10-08 18:33:56.442778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-10-08 18:33:56.442906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-10-08 18:33:56.442934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-10-08 18:33:56.443022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-10-08 18:33:56.443051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-10-08 18:33:56.443144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-10-08 18:33:56.443173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-10-08 18:33:56.443273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-10-08 18:33:56.443317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.428 qpair failed and we were unable to recover it. 00:26:38.428 [2024-10-08 18:33:56.443440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.428 [2024-10-08 18:33:56.443469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-10-08 18:33:56.443649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.443720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-10-08 18:33:56.443882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.443912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-10-08 18:33:56.444034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.444063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-10-08 18:33:56.445398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.445431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 
00:26:38.429 [2024-10-08 18:33:56.445562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.445590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-10-08 18:33:56.445693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.445719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-10-08 18:33:56.445841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.445868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-10-08 18:33:56.445955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.445982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-10-08 18:33:56.446110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.446138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-10-08 18:33:56.446232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.446260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-10-08 18:33:56.446392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.446418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-10-08 18:33:56.446518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.446545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-10-08 18:33:56.446632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.446658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-10-08 18:33:56.446750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.446778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 
00:26:38.429 [2024-10-08 18:33:56.446865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.446893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-10-08 18:33:56.447009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.447038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-10-08 18:33:56.447148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.447176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-10-08 18:33:56.447267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.447303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-10-08 18:33:56.447429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.447457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-10-08 18:33:56.447574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.447602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-10-08 18:33:56.447699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.447730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-10-08 18:33:56.447869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.447897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-10-08 18:33:56.448052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.448080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-10-08 18:33:56.448174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.448202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 
00:26:38.429 [2024-10-08 18:33:56.448301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.448339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-10-08 18:33:56.448459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.448487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-10-08 18:33:56.448592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.448620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-10-08 18:33:56.448743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.448771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-10-08 18:33:56.448908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.448936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-10-08 18:33:56.449078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.449105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-10-08 18:33:56.449242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.449276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-10-08 18:33:56.449403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.449432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-10-08 18:33:56.449515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.449554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-10-08 18:33:56.449672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.449699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 
00:26:38.429 [2024-10-08 18:33:56.449817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.449845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.429 qpair failed and we were unable to recover it. 00:26:38.429 [2024-10-08 18:33:56.449962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.429 [2024-10-08 18:33:56.449992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.430 qpair failed and we were unable to recover it. 00:26:38.430 [2024-10-08 18:33:56.450118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.430 [2024-10-08 18:33:56.450146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.430 qpair failed and we were unable to recover it. 00:26:38.430 [2024-10-08 18:33:56.450302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.430 [2024-10-08 18:33:56.450332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.430 qpair failed and we were unable to recover it. 00:26:38.430 [2024-10-08 18:33:56.450459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.430 [2024-10-08 18:33:56.450486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.430 qpair failed and we were unable to recover it. 00:26:38.430 [2024-10-08 18:33:56.450592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.430 [2024-10-08 18:33:56.450620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.430 qpair failed and we were unable to recover it. 00:26:38.430 [2024-10-08 18:33:56.450743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.430 [2024-10-08 18:33:56.450770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.430 qpair failed and we were unable to recover it. 00:26:38.430 [2024-10-08 18:33:56.450860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.430 [2024-10-08 18:33:56.450897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.430 qpair failed and we were unable to recover it. 00:26:38.430 [2024-10-08 18:33:56.451017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.430 [2024-10-08 18:33:56.451045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.430 qpair failed and we were unable to recover it. 00:26:38.430 [2024-10-08 18:33:56.451175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.430 [2024-10-08 18:33:56.451203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.430 qpair failed and we were unable to recover it. 
00:26:38.430 [2024-10-08 18:33:56.451301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.430 [2024-10-08 18:33:56.451333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.430 qpair failed and we were unable to recover it. 00:26:38.430 [2024-10-08 18:33:56.451424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.430 [2024-10-08 18:33:56.451453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.430 qpair failed and we were unable to recover it. 00:26:38.430 [2024-10-08 18:33:56.451546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.430 [2024-10-08 18:33:56.451574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.430 qpair failed and we were unable to recover it. 00:26:38.430 [2024-10-08 18:33:56.451663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.430 [2024-10-08 18:33:56.451692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.430 qpair failed and we were unable to recover it. 00:26:38.430 [2024-10-08 18:33:56.451816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.430 [2024-10-08 18:33:56.451845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.430 qpair failed and we were unable to recover it. 00:26:38.430 [2024-10-08 18:33:56.451983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.430 [2024-10-08 18:33:56.452011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.430 qpair failed and we were unable to recover it. 00:26:38.430 [2024-10-08 18:33:56.452128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.430 [2024-10-08 18:33:56.452157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.430 qpair failed and we were unable to recover it. 00:26:38.430 [2024-10-08 18:33:56.452263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.430 [2024-10-08 18:33:56.452298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.430 qpair failed and we were unable to recover it. 00:26:38.430 [2024-10-08 18:33:56.452416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.430 [2024-10-08 18:33:56.452445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.430 qpair failed and we were unable to recover it. 00:26:38.430 [2024-10-08 18:33:56.452558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.430 [2024-10-08 18:33:56.452587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.430 qpair failed and we were unable to recover it. 
00:26:38.430 [2024-10-08 18:33:56.452668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.430 [2024-10-08 18:33:56.452695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.430 qpair failed and we were unable to recover it. 00:26:38.430 [2024-10-08 18:33:56.452836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.430 [2024-10-08 18:33:56.452864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.430 qpair failed and we were unable to recover it. 00:26:38.430 [2024-10-08 18:33:56.452984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.430 [2024-10-08 18:33:56.453012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.430 qpair failed and we were unable to recover it. 00:26:38.430 [2024-10-08 18:33:56.453133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.430 [2024-10-08 18:33:56.453160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.430 qpair failed and we were unable to recover it. 00:26:38.430 [2024-10-08 18:33:56.453263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.430 [2024-10-08 18:33:56.453296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.430 qpair failed and we were unable to recover it. 00:26:38.430 [2024-10-08 18:33:56.453405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.430 [2024-10-08 18:33:56.453432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.430 qpair failed and we were unable to recover it. 00:26:38.430 [2024-10-08 18:33:56.453519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.430 [2024-10-08 18:33:56.453547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.430 qpair failed and we were unable to recover it. 00:26:38.430 [2024-10-08 18:33:56.453671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.430 [2024-10-08 18:33:56.453700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.430 qpair failed and we were unable to recover it. 00:26:38.430 [2024-10-08 18:33:56.453792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.430 [2024-10-08 18:33:56.453821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.430 qpair failed and we were unable to recover it. 00:26:38.430 [2024-10-08 18:33:56.453961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.430 [2024-10-08 18:33:56.453989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.430 qpair failed and we were unable to recover it. 
00:26:38.430 [2024-10-08 18:33:56.454112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.430 [2024-10-08 18:33:56.454141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420
00:26:38.430 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously between the entries shown above and below, mostly for tqpair=0x7f816c000b90, with shorter runs for tqpair=0x1ff7060 and tqpair=0x7f8174000b90 ...]
00:26:38.436 [2024-10-08 18:33:56.484646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.436 [2024-10-08 18:33:56.484674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420
00:26:38.436 qpair failed and we were unable to recover it.
00:26:38.436 [2024-10-08 18:33:56.484791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.436 [2024-10-08 18:33:56.484820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.436 qpair failed and we were unable to recover it. 00:26:38.436 [2024-10-08 18:33:56.484906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.436 [2024-10-08 18:33:56.484933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.436 qpair failed and we were unable to recover it. 00:26:38.436 [2024-10-08 18:33:56.485029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.436 [2024-10-08 18:33:56.485057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.436 qpair failed and we were unable to recover it. 00:26:38.436 [2024-10-08 18:33:56.485159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.436 [2024-10-08 18:33:56.485186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.436 qpair failed and we were unable to recover it. 00:26:38.436 [2024-10-08 18:33:56.485295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.436 [2024-10-08 18:33:56.485346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.436 qpair failed and we were unable to recover it. 00:26:38.436 [2024-10-08 18:33:56.485473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.436 [2024-10-08 18:33:56.485502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.436 qpair failed and we were unable to recover it. 00:26:38.436 [2024-10-08 18:33:56.485627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.436 [2024-10-08 18:33:56.485655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.436 qpair failed and we were unable to recover it. 00:26:38.436 [2024-10-08 18:33:56.485784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.436 [2024-10-08 18:33:56.485828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.436 qpair failed and we were unable to recover it. 00:26:38.436 [2024-10-08 18:33:56.485964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.436 [2024-10-08 18:33:56.486010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.436 qpair failed and we were unable to recover it. 00:26:38.436 [2024-10-08 18:33:56.486148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.436 [2024-10-08 18:33:56.486194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.436 qpair failed and we were unable to recover it. 
00:26:38.436 [2024-10-08 18:33:56.486366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.436 [2024-10-08 18:33:56.486394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.436 qpair failed and we were unable to recover it. 00:26:38.436 [2024-10-08 18:33:56.486511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.436 [2024-10-08 18:33:56.486564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.436 qpair failed and we were unable to recover it. 00:26:38.436 [2024-10-08 18:33:56.486709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.436 [2024-10-08 18:33:56.486766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.436 qpair failed and we were unable to recover it. 00:26:38.436 [2024-10-08 18:33:56.486974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.436 [2024-10-08 18:33:56.487028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.436 qpair failed and we were unable to recover it. 00:26:38.436 [2024-10-08 18:33:56.487141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.436 [2024-10-08 18:33:56.487170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.436 qpair failed and we were unable to recover it. 00:26:38.436 [2024-10-08 18:33:56.487299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.436 [2024-10-08 18:33:56.487384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.436 qpair failed and we were unable to recover it. 00:26:38.436 [2024-10-08 18:33:56.487561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.436 [2024-10-08 18:33:56.487615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.436 qpair failed and we were unable to recover it. 00:26:38.436 [2024-10-08 18:33:56.487883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.436 [2024-10-08 18:33:56.487926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.436 qpair failed and we were unable to recover it. 00:26:38.436 [2024-10-08 18:33:56.488116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.436 [2024-10-08 18:33:56.488159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.436 qpair failed and we were unable to recover it. 00:26:38.436 [2024-10-08 18:33:56.488300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.436 [2024-10-08 18:33:56.488356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.436 qpair failed and we were unable to recover it. 
00:26:38.436 [2024-10-08 18:33:56.488499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.436 [2024-10-08 18:33:56.488527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.436 qpair failed and we were unable to recover it. 00:26:38.436 [2024-10-08 18:33:56.488648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.436 [2024-10-08 18:33:56.488691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.436 qpair failed and we were unable to recover it. 00:26:38.436 [2024-10-08 18:33:56.488865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.436 [2024-10-08 18:33:56.488908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.436 qpair failed and we were unable to recover it. 00:26:38.436 [2024-10-08 18:33:56.489043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.436 [2024-10-08 18:33:56.489093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-10-08 18:33:56.489222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.489251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-10-08 18:33:56.489380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.489408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-10-08 18:33:56.489517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.489581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-10-08 18:33:56.489692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.489746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-10-08 18:33:56.489898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.489957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-10-08 18:33:56.490077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.490105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 
00:26:38.437 [2024-10-08 18:33:56.490201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.490228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-10-08 18:33:56.490324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.490360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-10-08 18:33:56.490476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.490503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-10-08 18:33:56.490627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.490656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-10-08 18:33:56.490740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.490767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-10-08 18:33:56.490884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.490913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-10-08 18:33:56.491013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.491040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-10-08 18:33:56.491163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.491190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-10-08 18:33:56.491319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.491357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-10-08 18:33:56.491444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.491472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 
00:26:38.437 [2024-10-08 18:33:56.491561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.491589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-10-08 18:33:56.491708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.491736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-10-08 18:33:56.491861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.491901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-10-08 18:33:56.491993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.492020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-10-08 18:33:56.492107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.492134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-10-08 18:33:56.492246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.492273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-10-08 18:33:56.492395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.492423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-10-08 18:33:56.492537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.492564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-10-08 18:33:56.492649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.492676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-10-08 18:33:56.492766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.492794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 
00:26:38.437 [2024-10-08 18:33:56.492892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.492919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-10-08 18:33:56.493040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.493067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-10-08 18:33:56.493176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.493215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-10-08 18:33:56.497407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.497450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-10-08 18:33:56.497580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.497619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-10-08 18:33:56.497751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.497807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-10-08 18:33:56.497975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.498036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-10-08 18:33:56.498153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.498183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.437 qpair failed and we were unable to recover it. 00:26:38.437 [2024-10-08 18:33:56.498381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.437 [2024-10-08 18:33:56.498410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.498524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.498552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 
00:26:38.438 [2024-10-08 18:33:56.498646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.498672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.498772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.498800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.498920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.498947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.499037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.499071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.499221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.499249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.499364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.499394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.499534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.499566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.499652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.499681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.499774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.499811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.499897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.499924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 
00:26:38.438 [2024-10-08 18:33:56.500045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.500085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.500177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.500204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.500344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.500372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.500453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.500481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.500577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.500605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.500698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.500725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.500809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.500834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.500950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.500977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.501092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.501120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.501199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.501226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 
00:26:38.438 [2024-10-08 18:33:56.501345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.501374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.501481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.501508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.501598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.501626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.501754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.501795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.501922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.501963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.502074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.502103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.502225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.502253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.502392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.502421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.502540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.502573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.502746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.502797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 
00:26:38.438 [2024-10-08 18:33:56.503008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.503057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.503251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.503316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.503449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.503477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.503574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.503602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.503742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.503800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.504042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.504093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.504357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.504385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.504525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.504562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.438 qpair failed and we were unable to recover it. 00:26:38.438 [2024-10-08 18:33:56.504717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.438 [2024-10-08 18:33:56.504784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.505052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.505092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 
00:26:38.439 [2024-10-08 18:33:56.505254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.505325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.505496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.505524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.505660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.505696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.505817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.505879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.506095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.506134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.506301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.506351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.506468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.506496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.506590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.506618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.506762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.506807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.506972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.507012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 
00:26:38.439 [2024-10-08 18:33:56.507145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.507193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.507414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.507443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.507558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.507597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.507862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.507902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.508082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.508140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.508347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.508376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.508487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.508515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.508605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.508637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.508789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.508817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.509009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.509074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 
00:26:38.439 [2024-10-08 18:33:56.509258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.509302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.509448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.509476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.509587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.509615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.509728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.509755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.509924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.509983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.510174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.510225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.510403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.510431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.510513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.510542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.510697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.510725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.510866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.510918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 
00:26:38.439 [2024-10-08 18:33:56.511089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.511141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.511312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.511350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.511466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.511495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.511596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.511647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.511775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.511824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.512064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.512116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.512344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.512373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.439 [2024-10-08 18:33:56.512465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.439 [2024-10-08 18:33:56.512494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.439 qpair failed and we were unable to recover it. 00:26:38.440 [2024-10-08 18:33:56.512615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.512643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 00:26:38.440 [2024-10-08 18:33:56.512835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.512887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 
00:26:38.440 [2024-10-08 18:33:56.513041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.513094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 00:26:38.440 [2024-10-08 18:33:56.513261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.513304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 00:26:38.440 [2024-10-08 18:33:56.513437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.513464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 00:26:38.440 [2024-10-08 18:33:56.513590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.513622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 00:26:38.440 [2024-10-08 18:33:56.513715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.513760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 00:26:38.440 [2024-10-08 18:33:56.513878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.513915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 00:26:38.440 [2024-10-08 18:33:56.514102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.514153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 00:26:38.440 [2024-10-08 18:33:56.514327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.514379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 00:26:38.440 [2024-10-08 18:33:56.514527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.514587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 00:26:38.440 [2024-10-08 18:33:56.514817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.514868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 
00:26:38.440 [2024-10-08 18:33:56.515027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.515103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 00:26:38.440 [2024-10-08 18:33:56.515357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.515387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 00:26:38.440 [2024-10-08 18:33:56.515477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.515505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 00:26:38.440 [2024-10-08 18:33:56.515645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.515673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 00:26:38.440 [2024-10-08 18:33:56.515804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.515872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 00:26:38.440 [2024-10-08 18:33:56.516018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.516079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 00:26:38.440 [2024-10-08 18:33:56.516267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.516312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 00:26:38.440 [2024-10-08 18:33:56.516432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.516462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 00:26:38.440 [2024-10-08 18:33:56.516614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.516642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 00:26:38.440 [2024-10-08 18:33:56.516757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.516809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 
00:26:38.440 [2024-10-08 18:33:56.516950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.517013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 00:26:38.440 [2024-10-08 18:33:56.517191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.517219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 00:26:38.440 [2024-10-08 18:33:56.517341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.517369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 00:26:38.440 [2024-10-08 18:33:56.517491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.517520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 00:26:38.440 [2024-10-08 18:33:56.517634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.517674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 00:26:38.440 [2024-10-08 18:33:56.517825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.517861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 00:26:38.440 [2024-10-08 18:33:56.518031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.518067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 00:26:38.440 [2024-10-08 18:33:56.518208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.518245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 00:26:38.440 [2024-10-08 18:33:56.518482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.518519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 00:26:38.440 [2024-10-08 18:33:56.518718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.518758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 
00:26:38.440 [2024-10-08 18:33:56.518879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.518919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 00:26:38.440 [2024-10-08 18:33:56.519123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.519189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 00:26:38.440 [2024-10-08 18:33:56.519424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.440 [2024-10-08 18:33:56.519461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.440 qpair failed and we were unable to recover it. 00:26:38.440 [2024-10-08 18:33:56.519662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.519710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 00:26:38.441 [2024-10-08 18:33:56.519949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.520000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 00:26:38.441 [2024-10-08 18:33:56.520202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.520254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 00:26:38.441 [2024-10-08 18:33:56.520448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.520485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 00:26:38.441 [2024-10-08 18:33:56.520605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.520653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 00:26:38.441 [2024-10-08 18:33:56.520843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.520915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 00:26:38.441 [2024-10-08 18:33:56.521102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.521151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 
00:26:38.441 [2024-10-08 18:33:56.521341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.521379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 00:26:38.441 [2024-10-08 18:33:56.521528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.521571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 00:26:38.441 [2024-10-08 18:33:56.521754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.521819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 00:26:38.441 [2024-10-08 18:33:56.521944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.522004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 00:26:38.441 [2024-10-08 18:33:56.522208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.522261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 00:26:38.441 [2024-10-08 18:33:56.522433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.522469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 00:26:38.441 [2024-10-08 18:33:56.522611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.522655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 00:26:38.441 [2024-10-08 18:33:56.522821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.522869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 00:26:38.441 [2024-10-08 18:33:56.523057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.523107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 00:26:38.441 [2024-10-08 18:33:56.523297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.523338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 
00:26:38.441 [2024-10-08 18:33:56.523476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.523513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 00:26:38.441 [2024-10-08 18:33:56.523699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.523748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 00:26:38.441 [2024-10-08 18:33:56.524005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.524045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 00:26:38.441 [2024-10-08 18:33:56.524170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.524209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 00:26:38.441 [2024-10-08 18:33:56.524408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.524446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 00:26:38.441 [2024-10-08 18:33:56.524618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.524654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 00:26:38.441 [2024-10-08 18:33:56.524854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.524905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 00:26:38.441 [2024-10-08 18:33:56.525175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.525215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 00:26:38.441 [2024-10-08 18:33:56.525387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.525424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 00:26:38.441 [2024-10-08 18:33:56.525568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.525635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 
00:26:38.441 [2024-10-08 18:33:56.525894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.525934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 00:26:38.441 [2024-10-08 18:33:56.526077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.526133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 00:26:38.441 [2024-10-08 18:33:56.526378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.526415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 00:26:38.441 [2024-10-08 18:33:56.526564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.526621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 00:26:38.441 [2024-10-08 18:33:56.526815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.526851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 00:26:38.441 [2024-10-08 18:33:56.526995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.527030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 00:26:38.441 [2024-10-08 18:33:56.527203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.527243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 00:26:38.441 [2024-10-08 18:33:56.527417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.527454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 00:26:38.441 [2024-10-08 18:33:56.527655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.441 [2024-10-08 18:33:56.527703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.441 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.527884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.527932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 
00:26:38.442 [2024-10-08 18:33:56.528114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.528167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.528395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.528431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.528537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.528614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.528781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.528817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.528928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.528964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.529104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.529153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.529332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.529369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.529516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.529563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.529696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.529731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.529871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.529908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 
00:26:38.442 [2024-10-08 18:33:56.530054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.530094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.530280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.530361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.530482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.530518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.530655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.530697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.530887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.530927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.531057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.531112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.531348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.531386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.531529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.531566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.531900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.531948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.532126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.532192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 
00:26:38.442 [2024-10-08 18:33:56.532409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.532450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.532601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.532640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.532797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.532837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.533055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.533103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.533312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.533375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.533618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.533687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.533896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.533945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.534109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.534158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.534311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.534374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.534527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.534576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 
00:26:38.442 [2024-10-08 18:33:56.534766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.534834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.535069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.535117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.535317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.535366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.535514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.535560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.535730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.535766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.535903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.535939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.536109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.536149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.536318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.536365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.442 [2024-10-08 18:33:56.536533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.442 [2024-10-08 18:33:56.536586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.442 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.536787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.536823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 
00:26:38.443 [2024-10-08 18:33:56.536990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.537039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.537240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.537298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.537470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.537501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.537637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.537668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.537822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.537853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.537971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.538029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.538206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.538237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.538398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.538429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.538531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.538562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.538728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.538779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 
00:26:38.443 [2024-10-08 18:33:56.538987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.539038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.539238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.539298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.539430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.539460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.539586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.539642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.539811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.539863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.540022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.540087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.540272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.540311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.540437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.540468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.540611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.540643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.540764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.540796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 
00:26:38.443 [2024-10-08 18:33:56.540951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.540994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.541119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.541150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.541268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.541304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.541443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.541474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.541620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.541682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.541831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.541895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.542140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.542192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.542437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.542468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.542564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.542629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.542783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.542835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 
00:26:38.443 [2024-10-08 18:33:56.542982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.543043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.543280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.543363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.543498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.543529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.543659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.543690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.543837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.543873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.544015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.544049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.544207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.544241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.544386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.544418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.443 qpair failed and we were unable to recover it. 00:26:38.443 [2024-10-08 18:33:56.544524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.443 [2024-10-08 18:33:56.544555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.544685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.544718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 
00:26:38.444 [2024-10-08 18:33:56.544885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.544919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.545020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.545054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.545155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.545187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.545328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.545360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.545459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.545490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.545579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.545622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.545766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.545799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.545920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.545953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.546114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.546148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.546278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.546333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 
00:26:38.444 [2024-10-08 18:33:56.546486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.546531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.546716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.546751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.546905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.546936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.547070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.547120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.547258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.547313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.547445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.547476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.547589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.547640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.547769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.547821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.547966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.548002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.548136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.548184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 
00:26:38.444 [2024-10-08 18:33:56.548338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.548384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.548520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.548565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.548784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.548833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.548961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.548997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.549124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.549158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.549306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.549343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.549464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.549496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.549628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.549676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.549809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.549859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.549995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.550028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 
00:26:38.444 [2024-10-08 18:33:56.550162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.550195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.550311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.550348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.550476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.550506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.550663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.550697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.550829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.550863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.551036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.551089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.551269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.551307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.444 qpair failed and we were unable to recover it. 00:26:38.444 [2024-10-08 18:33:56.551452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.444 [2024-10-08 18:33:56.551482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.551583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.551614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.551718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.551768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 
00:26:38.445 [2024-10-08 18:33:56.551905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.551976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.552098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.552129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.552276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.552345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.552453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.552484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.552574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.552623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.552730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.552764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.552870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.552903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.553036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.553069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.553212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.553245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.553397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.553442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 
00:26:38.445 [2024-10-08 18:33:56.553564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.553626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.553725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.553757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.553904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.553954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.554043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.554072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.554205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.554237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.554372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.554422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.554522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.554561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.554695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.554726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.554817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.554846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.554945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.554976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 
00:26:38.445 [2024-10-08 18:33:56.555076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.555107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.555257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.555300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.555426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.555458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.555615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.555660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.555769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.555802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.555933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.555965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.556068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.556099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.556212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.556257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.556381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.556416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.556531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.556587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 
00:26:38.445 [2024-10-08 18:33:56.556699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.556745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.556915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.556966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.557065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.557096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.557222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.557253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.557376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.557408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.557514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.557545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.445 qpair failed and we were unable to recover it. 00:26:38.445 [2024-10-08 18:33:56.557681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.445 [2024-10-08 18:33:56.557711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 00:26:38.446 [2024-10-08 18:33:56.557834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.557865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 00:26:38.446 [2024-10-08 18:33:56.557991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.558022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 00:26:38.446 [2024-10-08 18:33:56.558145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.558177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 
00:26:38.446 [2024-10-08 18:33:56.558283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.558338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 00:26:38.446 [2024-10-08 18:33:56.558433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.558464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 00:26:38.446 [2024-10-08 18:33:56.558572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.558603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 00:26:38.446 [2024-10-08 18:33:56.558707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.558736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 00:26:38.446 [2024-10-08 18:33:56.558864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.558895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 00:26:38.446 [2024-10-08 18:33:56.558995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.559026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 00:26:38.446 [2024-10-08 18:33:56.559149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.559180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 00:26:38.446 [2024-10-08 18:33:56.559317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.559359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 00:26:38.446 [2024-10-08 18:33:56.559452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.559483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 00:26:38.446 [2024-10-08 18:33:56.559583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.559614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 
00:26:38.446 [2024-10-08 18:33:56.559735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.559767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 00:26:38.446 [2024-10-08 18:33:56.559862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.559894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 00:26:38.446 [2024-10-08 18:33:56.560046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.560077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 00:26:38.446 [2024-10-08 18:33:56.560213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.560244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 00:26:38.446 [2024-10-08 18:33:56.560361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.560393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 00:26:38.446 [2024-10-08 18:33:56.560491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.560522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 00:26:38.446 [2024-10-08 18:33:56.560656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.560701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 00:26:38.446 [2024-10-08 18:33:56.560804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.560837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 00:26:38.446 [2024-10-08 18:33:56.560934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.560966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 00:26:38.446 [2024-10-08 18:33:56.561087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.561118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 
00:26:38.446 [2024-10-08 18:33:56.561225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.561270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 00:26:38.446 [2024-10-08 18:33:56.561406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.561441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 00:26:38.446 [2024-10-08 18:33:56.561540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.561598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 00:26:38.446 [2024-10-08 18:33:56.561767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.561803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 00:26:38.446 [2024-10-08 18:33:56.561996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.562045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 00:26:38.446 [2024-10-08 18:33:56.562202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.562233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 00:26:38.446 [2024-10-08 18:33:56.562352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.562383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 00:26:38.446 [2024-10-08 18:33:56.562484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.562521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 00:26:38.446 [2024-10-08 18:33:56.562713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.562749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 00:26:38.446 [2024-10-08 18:33:56.562882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.562932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 
00:26:38.446 [2024-10-08 18:33:56.563060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.563111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 00:26:38.446 [2024-10-08 18:33:56.563272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.446 [2024-10-08 18:33:56.563340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.446 qpair failed and we were unable to recover it. 00:26:38.446 [2024-10-08 18:33:56.563439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.563471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.563591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.563625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.563784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.563833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.564006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.564056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.564157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.564189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.564304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.564351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.564454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.564486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.564608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.564639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 
00:26:38.447 [2024-10-08 18:33:56.564769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.564817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.564933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.564983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.565116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.565167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.565293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.565336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.565442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.565472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.565601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.565637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.565795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.565830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.565955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.566005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.566132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.566180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.566306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.566337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 
00:26:38.447 [2024-10-08 18:33:56.566438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.566469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.566599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.566633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.566833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.566882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.567067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.567103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.567228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.567268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.567404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.567435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.567531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.567585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.567744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.567778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.567915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.567965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.568107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.568142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 
00:26:38.447 [2024-10-08 18:33:56.568269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.568315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.568427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.568474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.568576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.568627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.568780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.568816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.568950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.568985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.569112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.569148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.569268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.569326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.569429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.569474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.569619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.569669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.569784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.569832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 
00:26:38.447 [2024-10-08 18:33:56.569984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.447 [2024-10-08 18:33:56.570014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.447 qpair failed and we were unable to recover it. 00:26:38.447 [2024-10-08 18:33:56.570145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.570177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.570316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.570348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.570477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.570509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.570603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.570634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.570757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.570788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.570914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.570945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.571064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.571095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.571187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.571217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.571344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.571376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 
00:26:38.448 [2024-10-08 18:33:56.571476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.571508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.571650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.571707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.571904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.571954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.572049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.572080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.572203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.572234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.572367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.572416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.572515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.572545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.572671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.572702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.572822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.572854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.572953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.572995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 
00:26:38.448 [2024-10-08 18:33:56.573127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.573157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.573344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.573378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.573471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.573504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.573664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.573716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.573876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.573928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.574062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.574093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.574184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.574215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.574318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.574350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.574472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.574520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.574664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.574717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 
00:26:38.448 [2024-10-08 18:33:56.574872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.574909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.575110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.575146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.575324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.575380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.575525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.575558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.575729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.575778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.575985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.576020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.576164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.576196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.576346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.576377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.576524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.576574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.576687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.576735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 
00:26:38.448 [2024-10-08 18:33:56.576885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.448 [2024-10-08 18:33:56.576936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.448 qpair failed and we were unable to recover it. 00:26:38.448 [2024-10-08 18:33:56.577055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.577086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.577186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.577217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.577337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.577367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.577482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.577512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.577645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.577676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.577803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.577834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.577960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.577991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.578107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.578152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.578320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.578354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 
00:26:38.449 [2024-10-08 18:33:56.578474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.578504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.578644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.578675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.578804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.578865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.578996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.579032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.579203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.579234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.579372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.579404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.579500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.579530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.579680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.579729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.579910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.579943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.580101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.580150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 
00:26:38.449 [2024-10-08 18:33:56.580282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.580342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.580474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.580508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.580657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.580709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.580866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.580921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.581059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.581110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.581214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.581246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.581380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.581412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.581542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.581573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.581701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.581731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.581870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.581901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 
00:26:38.449 [2024-10-08 18:33:56.582027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.582059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.582186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.582217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.582312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.582344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.582454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.582490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.582656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.582686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.582783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.582814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.582930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.582981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.583106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.583136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.583263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.583306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.449 [2024-10-08 18:33:56.583407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.583455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 
00:26:38.449 [2024-10-08 18:33:56.583580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.449 [2024-10-08 18:33:56.583615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.449 qpair failed and we were unable to recover it. 00:26:38.450 [2024-10-08 18:33:56.583835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.583871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 00:26:38.450 [2024-10-08 18:33:56.584039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.584075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 00:26:38.450 [2024-10-08 18:33:56.584211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.584247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 00:26:38.450 [2024-10-08 18:33:56.584420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.584451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 00:26:38.450 [2024-10-08 18:33:56.584543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.584590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 00:26:38.450 [2024-10-08 18:33:56.584731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.584767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 00:26:38.450 [2024-10-08 18:33:56.584964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.585000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 00:26:38.450 [2024-10-08 18:33:56.585140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.585175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 00:26:38.450 [2024-10-08 18:33:56.585338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.585369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 
00:26:38.450 [2024-10-08 18:33:56.585498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.585529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 00:26:38.450 [2024-10-08 18:33:56.585694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.585729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 00:26:38.450 [2024-10-08 18:33:56.585905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.585941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 00:26:38.450 [2024-10-08 18:33:56.586140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.586175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 00:26:38.450 [2024-10-08 18:33:56.586327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.586375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 00:26:38.450 [2024-10-08 18:33:56.586495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.586526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 00:26:38.450 [2024-10-08 18:33:56.586625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.586656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 00:26:38.450 [2024-10-08 18:33:56.586788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.586818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 00:26:38.450 [2024-10-08 18:33:56.586981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.587017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 00:26:38.450 [2024-10-08 18:33:56.587173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.587204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 
00:26:38.450 [2024-10-08 18:33:56.587338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.587369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 00:26:38.450 [2024-10-08 18:33:56.587461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.587493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 00:26:38.450 [2024-10-08 18:33:56.587634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.587670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 00:26:38.450 [2024-10-08 18:33:56.587834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.587869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 00:26:38.450 [2024-10-08 18:33:56.588000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.588035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 00:26:38.450 [2024-10-08 18:33:56.588174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.588210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 00:26:38.450 [2024-10-08 18:33:56.588372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.588403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 00:26:38.450 [2024-10-08 18:33:56.588525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.588555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 00:26:38.450 [2024-10-08 18:33:56.588802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.588858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 00:26:38.450 [2024-10-08 18:33:56.589007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.589043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 
00:26:38.450 [2024-10-08 18:33:56.589189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.589224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 00:26:38.450 [2024-10-08 18:33:56.589366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.589399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 00:26:38.450 [2024-10-08 18:33:56.589532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.589563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 00:26:38.450 [2024-10-08 18:33:56.589717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.589753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 00:26:38.450 [2024-10-08 18:33:56.589942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.589978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 00:26:38.450 [2024-10-08 18:33:56.590088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.450 [2024-10-08 18:33:56.590124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.450 qpair failed and we were unable to recover it. 00:26:38.450 [2024-10-08 18:33:56.590284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.590351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.590454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.590484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.590609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.590639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.590794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.590830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 
00:26:38.451 [2024-10-08 18:33:56.591000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.591036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.591212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.591247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.591430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.591461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.591618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.591647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.591741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.591795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.591962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.591997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.592098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.592133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.592254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.592297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.592444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.592472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.592581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.592610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 
00:26:38.451 [2024-10-08 18:33:56.592736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.592764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.592926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.592970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.593148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.593181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.593376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.593405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.593577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.593612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.593734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.593780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.593923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.593957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.594101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.594135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.594304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.594358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.594475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.594508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 
00:26:38.451 [2024-10-08 18:33:56.594668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.594721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.594858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.594910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.595032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.595083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.595180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.595212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.595362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.595394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.595490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.595519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.595643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.595678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.595809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.595841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.595966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.595996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.596146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.596176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 
00:26:38.451 [2024-10-08 18:33:56.596277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.596317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.596429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.596460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.596604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.596650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.596753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.596786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.596938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.596969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.451 [2024-10-08 18:33:56.597117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.451 [2024-10-08 18:33:56.597153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.451 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.597253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.597299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.597418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.597447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.597532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.597572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.597718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.597766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 
00:26:38.452 [2024-10-08 18:33:56.597915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.597950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.598090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.598124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.598260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.598305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.598444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.598473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.598569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.598608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.598728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.598758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.598912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.598946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.599116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.599152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.599331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.599370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.599465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.599495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 
00:26:38.452 [2024-10-08 18:33:56.599619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.599650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.599775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.599823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.599990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.600025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.600226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.600269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.600421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.600450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.600547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.600581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.600729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.600763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.600906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.600941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.601085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.601121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.601299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.601339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 
00:26:38.452 [2024-10-08 18:33:56.601428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.601458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.601584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.601615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.601727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.601772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.601955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.601991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.602140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.602176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.602316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.602349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.602501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.602531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.602737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.602805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.602999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.603034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.603214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.603245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 
00:26:38.452 [2024-10-08 18:33:56.603373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.603404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.603548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.603580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.603771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.603834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.603949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.603985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.604110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.452 [2024-10-08 18:33:56.604158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.452 qpair failed and we were unable to recover it. 00:26:38.452 [2024-10-08 18:33:56.604303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.604350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.604454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.604484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.604589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.604618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.604744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.604774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.604962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.604998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 
00:26:38.453 [2024-10-08 18:33:56.605192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.605233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.605383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.605413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.605538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.605590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.605758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.605789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.605891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.605921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.606098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.606133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.606285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.606324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.606417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.606447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.606601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.606636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.606827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.606862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 
00:26:38.453 [2024-10-08 18:33:56.607028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.607059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.607206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.607241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.607398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.607429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.607550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.607597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.607773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.607808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.607965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.608000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.608167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.608202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.608338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.608369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.608454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.608484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.608602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.608631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 
00:26:38.453 [2024-10-08 18:33:56.608814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.608862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.609093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.609128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.609267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.609319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.609424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.609455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.609605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.609634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.609720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.609765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.609930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.609965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.610133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.610172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.610367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.610397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.610525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.610580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 
00:26:38.453 [2024-10-08 18:33:56.610747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.610818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.610955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.610990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.611124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.611158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.611298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.611349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.611484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.611517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.453 [2024-10-08 18:33:56.611661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.453 [2024-10-08 18:33:56.611693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.453 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.611840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.611894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.612078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.612128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.612306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.612355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.612544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.612576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 
00:26:38.454 [2024-10-08 18:33:56.612704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.612735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.612871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.612902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.613029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.613061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.613156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.613186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.613327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.613357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.613472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.613507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.613723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.613785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.613992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.614052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.614172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.614201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.614300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.614338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 
00:26:38.454 [2024-10-08 18:33:56.614431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.614460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.614582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.614616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.614714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.614749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.614864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.614899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.615086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.615149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.615245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.615276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.615435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.615466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.615609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.615658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.615802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.615852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.615998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.616048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 
00:26:38.454 [2024-10-08 18:33:56.616201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.616233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.616350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.616381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.616486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.616520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.616671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.616707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.616905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.616965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.617109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.617139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.617299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.617339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.617473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.617503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.617766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.617852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.618132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.618206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 
00:26:38.454 [2024-10-08 18:33:56.618421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.618454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.618551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.618583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.618884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.618944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.619174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.619210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.619390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.619423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.454 [2024-10-08 18:33:56.619545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.454 [2024-10-08 18:33:56.619585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.454 qpair failed and we were unable to recover it. 00:26:38.455 [2024-10-08 18:33:56.619732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.619762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 00:26:38.455 [2024-10-08 18:33:56.619995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.620052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 00:26:38.455 [2024-10-08 18:33:56.620206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.620235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 00:26:38.455 [2024-10-08 18:33:56.620367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.620398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 
00:26:38.455 [2024-10-08 18:33:56.620521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.620555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 00:26:38.455 [2024-10-08 18:33:56.620743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.620777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 00:26:38.455 [2024-10-08 18:33:56.620918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.620970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 00:26:38.455 [2024-10-08 18:33:56.621101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.621136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 00:26:38.455 [2024-10-08 18:33:56.621305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.621338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 00:26:38.455 [2024-10-08 18:33:56.621427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.621457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 00:26:38.455 [2024-10-08 18:33:56.621552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.621592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 00:26:38.455 [2024-10-08 18:33:56.621742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.621776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 00:26:38.455 [2024-10-08 18:33:56.621921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.621956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 00:26:38.455 [2024-10-08 18:33:56.622095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.622130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 
00:26:38.455 [2024-10-08 18:33:56.622272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.622340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 00:26:38.455 [2024-10-08 18:33:56.622476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.622509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 00:26:38.455 [2024-10-08 18:33:56.622686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.622738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 00:26:38.455 [2024-10-08 18:33:56.622957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.623018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 00:26:38.455 [2024-10-08 18:33:56.623146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.623177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 00:26:38.455 [2024-10-08 18:33:56.623347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.623378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 00:26:38.455 [2024-10-08 18:33:56.623508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.623555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 00:26:38.455 [2024-10-08 18:33:56.623744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.623813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 00:26:38.455 [2024-10-08 18:33:56.623984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.624052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 00:26:38.455 [2024-10-08 18:33:56.624170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.624200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 
00:26:38.455 [2024-10-08 18:33:56.624350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.624381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 00:26:38.455 [2024-10-08 18:33:56.624497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.624527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 00:26:38.455 [2024-10-08 18:33:56.624689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.624724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 00:26:38.455 [2024-10-08 18:33:56.624853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.624899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 00:26:38.455 [2024-10-08 18:33:56.625047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.625081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 00:26:38.455 [2024-10-08 18:33:56.625254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.625284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 00:26:38.455 [2024-10-08 18:33:56.625422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.625454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 00:26:38.455 [2024-10-08 18:33:56.625585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.625622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 00:26:38.455 [2024-10-08 18:33:56.625812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.625852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 00:26:38.455 [2024-10-08 18:33:56.625999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.626033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 
00:26:38.455 [2024-10-08 18:33:56.626174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.626210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 00:26:38.455 [2024-10-08 18:33:56.626397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.455 [2024-10-08 18:33:56.626443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.455 qpair failed and we were unable to recover it. 00:26:38.455 [2024-10-08 18:33:56.626605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.626638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.626755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.626806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.626954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.627005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.627109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.627141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.627301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.627341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.627518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.627556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.627728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.627762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.627899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.627933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 
00:26:38.456 [2024-10-08 18:33:56.628052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.628098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.628224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.628254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.628392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.628422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.628567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.628602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.628770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.628805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.628943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.628978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.629128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.629158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.629284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.629336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.629483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.629512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.629671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.629729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 
00:26:38.456 [2024-10-08 18:33:56.629879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.629934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.630057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.630108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.630228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.630260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.630413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.630465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.630579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.630610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.630732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.630779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.630912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.630943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.631056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.631086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.631209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.631239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.631375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.631406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 
00:26:38.456 [2024-10-08 18:33:56.631503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.631535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.631675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.631711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.631861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.631910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.632011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.632047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.632198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.632227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.632352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.632383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.632480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.632512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.632663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.632693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.632782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.632831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.632969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.633004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 
00:26:38.456 [2024-10-08 18:33:56.633147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.633182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.633295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.456 [2024-10-08 18:33:56.633345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.456 qpair failed and we were unable to recover it. 00:26:38.456 [2024-10-08 18:33:56.633430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.633461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.633587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.633620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.633784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.633833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.633952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.633988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.634134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.634165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.634314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.634346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.634493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.634546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.634753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.634807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 
00:26:38.457 [2024-10-08 18:33:56.635004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.635059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.635213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.635244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.635406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.635465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.635644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.635703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.635882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.635932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.636057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.636089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.636243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.636274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.636409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.636439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.636563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.636594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.636689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.636720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 
00:26:38.457 [2024-10-08 18:33:56.636877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.636927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.637077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.637108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.637259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.637301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.637463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.637514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.637665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.637715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.637859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.637908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.638014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.638045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.638182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.638213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.638325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.638356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.638460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.638491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 
00:26:38.457 [2024-10-08 18:33:56.638654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.638685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.638835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.638865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.638988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.639020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.639136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.639167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.639348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.639402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.639548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.639597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.639777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.639829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.639950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.639982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.640112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.640144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.640277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.640318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 
00:26:38.457 [2024-10-08 18:33:56.640416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.640447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.640585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.457 [2024-10-08 18:33:56.640647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.457 qpair failed and we were unable to recover it. 00:26:38.457 [2024-10-08 18:33:56.640775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.640805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.640932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.640963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.641086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.641117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.641257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.641311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.641440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.641473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.641609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.641641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.641791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.641821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.641961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.641997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 
00:26:38.458 [2024-10-08 18:33:56.642167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.642203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.642363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.642395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.642486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.642541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.642647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.642683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.642848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.642884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.643069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.643132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.643267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.643321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.643473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.643503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.643608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.643659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.643832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.643868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 
00:26:38.458 [2024-10-08 18:33:56.644003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.644038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.644207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.644242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.644403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.644434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.644577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.644613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.644759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.644795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.644969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.645005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.645187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.645223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.645358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.645388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.645512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.645543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.645704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.645740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 
00:26:38.458 [2024-10-08 18:33:56.645907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.645942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.646039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.646075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.646227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.646260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.646381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.646413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.646569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.646619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.646753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.646803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.647004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.647069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.647172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.647203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.647338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.647369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.647490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.647529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 
00:26:38.458 [2024-10-08 18:33:56.647669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.647700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.647855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.458 [2024-10-08 18:33:56.647885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.458 qpair failed and we were unable to recover it. 00:26:38.458 [2024-10-08 18:33:56.648044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.648075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 00:26:38.459 [2024-10-08 18:33:56.648197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.648228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 00:26:38.459 [2024-10-08 18:33:56.648354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.648385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 00:26:38.459 [2024-10-08 18:33:56.648512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.648543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 00:26:38.459 [2024-10-08 18:33:56.648664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.648696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 00:26:38.459 [2024-10-08 18:33:56.648841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.648872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 00:26:38.459 [2024-10-08 18:33:56.648970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.649002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 00:26:38.459 [2024-10-08 18:33:56.649129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.649160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 
00:26:38.459 [2024-10-08 18:33:56.649310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.649354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 00:26:38.459 [2024-10-08 18:33:56.649479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.649510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 00:26:38.459 [2024-10-08 18:33:56.649637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.649668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 00:26:38.459 [2024-10-08 18:33:56.649789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.649821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 00:26:38.459 [2024-10-08 18:33:56.649982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.650013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 00:26:38.459 [2024-10-08 18:33:56.650143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.650174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 00:26:38.459 [2024-10-08 18:33:56.650308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.650340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 00:26:38.459 [2024-10-08 18:33:56.650457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.650508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 00:26:38.459 [2024-10-08 18:33:56.650625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.650655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 00:26:38.459 [2024-10-08 18:33:56.650778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.650809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 
00:26:38.459 [2024-10-08 18:33:56.650935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.650966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 00:26:38.459 [2024-10-08 18:33:56.651062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.651096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 00:26:38.459 [2024-10-08 18:33:56.651193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.651225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 00:26:38.459 [2024-10-08 18:33:56.651346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.651378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 00:26:38.459 [2024-10-08 18:33:56.651508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.651538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 00:26:38.459 [2024-10-08 18:33:56.651663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.651694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 00:26:38.459 [2024-10-08 18:33:56.651846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.651878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 00:26:38.459 [2024-10-08 18:33:56.652029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.652059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 00:26:38.459 [2024-10-08 18:33:56.652161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.652192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 00:26:38.459 [2024-10-08 18:33:56.652296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.652329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 
00:26:38.459 [2024-10-08 18:33:56.652420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.652451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 00:26:38.459 [2024-10-08 18:33:56.652575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.652607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 00:26:38.459 [2024-10-08 18:33:56.652739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.652770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 00:26:38.459 [2024-10-08 18:33:56.652921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.652952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 00:26:38.459 [2024-10-08 18:33:56.653078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.653109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 00:26:38.459 [2024-10-08 18:33:56.653201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.653232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 00:26:38.459 [2024-10-08 18:33:56.653339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.653368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 00:26:38.459 [2024-10-08 18:33:56.653458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.459 [2024-10-08 18:33:56.653488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.459 qpair failed and we were unable to recover it. 00:26:38.459 [2024-10-08 18:33:56.653616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.653646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.653773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.653809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 
00:26:38.460 [2024-10-08 18:33:56.653912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.653943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.654068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.654098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.654195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.654226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.654358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.654389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.654517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.654548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.654698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.654729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.654827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.654858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.654981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.655012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.655135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.655166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.655300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.655351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 
00:26:38.460 [2024-10-08 18:33:56.655488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.655520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.655661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.655693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.655816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.655848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.655979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.656009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.656115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.656147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.656338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.656377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.656536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.656594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.656766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.656815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.656929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.656960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.657055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.657086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 
00:26:38.460 [2024-10-08 18:33:56.657213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.657244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.657404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.657456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.657643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.657694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.657838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.657896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.658002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.658033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.658138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.658169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.658308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.658353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.658480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.658511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.658636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.658667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.658796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.658827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 
00:26:38.460 [2024-10-08 18:33:56.658984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.659016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.659164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.659195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.659302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.659334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.659460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.659491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.659585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.659615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.659729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.659760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.659888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.659919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.460 [2024-10-08 18:33:56.660038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.460 [2024-10-08 18:33:56.660069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.460 qpair failed and we were unable to recover it. 00:26:38.461 [2024-10-08 18:33:56.660190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.461 [2024-10-08 18:33:56.660221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.461 qpair failed and we were unable to recover it. 00:26:38.461 [2024-10-08 18:33:56.660379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.461 [2024-10-08 18:33:56.660425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.461 qpair failed and we were unable to recover it. 
00:26:38.461 [2024-10-08 18:33:56.660571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.461 [2024-10-08 18:33:56.660605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.461 qpair failed and we were unable to recover it. 00:26:38.461 [2024-10-08 18:33:56.660760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.461 [2024-10-08 18:33:56.660791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.461 qpair failed and we were unable to recover it. 00:26:38.461 [2024-10-08 18:33:56.660906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.461 [2024-10-08 18:33:56.660937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.461 qpair failed and we were unable to recover it. 00:26:38.461 [2024-10-08 18:33:56.661060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.461 [2024-10-08 18:33:56.661090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.461 qpair failed and we were unable to recover it. 00:26:38.461 [2024-10-08 18:33:56.661222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.461 [2024-10-08 18:33:56.661254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.461 qpair failed and we were unable to recover it. 00:26:38.461 [2024-10-08 18:33:56.661392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.461 [2024-10-08 18:33:56.661425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.461 qpair failed and we were unable to recover it. 00:26:38.461 [2024-10-08 18:33:56.661581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.461 [2024-10-08 18:33:56.661612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.461 qpair failed and we were unable to recover it. 00:26:38.461 [2024-10-08 18:33:56.661763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.461 [2024-10-08 18:33:56.661794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.461 qpair failed and we were unable to recover it. 00:26:38.461 [2024-10-08 18:33:56.661973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.461 [2024-10-08 18:33:56.662024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.461 qpair failed and we were unable to recover it. 00:26:38.461 [2024-10-08 18:33:56.662174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.461 [2024-10-08 18:33:56.662204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.461 qpair failed and we were unable to recover it. 
00:26:38.461 [2024-10-08 18:33:56.662355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.461 [2024-10-08 18:33:56.662405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.461 qpair failed and we were unable to recover it. 00:26:38.461 [2024-10-08 18:33:56.662547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.461 [2024-10-08 18:33:56.662584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.461 qpair failed and we were unable to recover it. 00:26:38.461 [2024-10-08 18:33:56.662728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.461 [2024-10-08 18:33:56.662764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.461 qpair failed and we were unable to recover it. 00:26:38.461 [2024-10-08 18:33:56.662918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.461 [2024-10-08 18:33:56.662954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.461 qpair failed and we were unable to recover it. 00:26:38.461 [2024-10-08 18:33:56.663121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.461 [2024-10-08 18:33:56.663155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.461 qpair failed and we were unable to recover it. 00:26:38.461 [2024-10-08 18:33:56.663309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.461 [2024-10-08 18:33:56.663345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.461 qpair failed and we were unable to recover it. 00:26:38.461 [2024-10-08 18:33:56.663467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.461 [2024-10-08 18:33:56.663496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.461 qpair failed and we were unable to recover it. 00:26:38.461 [2024-10-08 18:33:56.663643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.461 [2024-10-08 18:33:56.663678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.461 qpair failed and we were unable to recover it. 00:26:38.461 [2024-10-08 18:33:56.663792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.461 [2024-10-08 18:33:56.663827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.461 qpair failed and we were unable to recover it. 00:26:38.461 [2024-10-08 18:33:56.663969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.461 [2024-10-08 18:33:56.664019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.461 qpair failed and we were unable to recover it. 
00:26:38.461 [2024-10-08 18:33:56.664164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.461 [2024-10-08 18:33:56.664194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.461 qpair failed and we were unable to recover it. 00:26:38.461 [2024-10-08 18:33:56.664357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.461 [2024-10-08 18:33:56.664388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.461 qpair failed and we were unable to recover it. 00:26:38.461 [2024-10-08 18:33:56.664517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.461 [2024-10-08 18:33:56.664557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.461 qpair failed and we were unable to recover it. 00:26:38.461 [2024-10-08 18:33:56.664775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.461 [2024-10-08 18:33:56.664811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.461 qpair failed and we were unable to recover it. 00:26:38.461 [2024-10-08 18:33:56.664925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.461 [2024-10-08 18:33:56.664960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.461 qpair failed and we were unable to recover it. 00:26:38.461 [2024-10-08 18:33:56.665079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.461 [2024-10-08 18:33:56.665115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.461 qpair failed and we were unable to recover it. 00:26:38.461 [2024-10-08 18:33:56.665261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.461 [2024-10-08 18:33:56.665297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.461 qpair failed and we were unable to recover it. 00:26:38.461 [2024-10-08 18:33:56.665442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.461 [2024-10-08 18:33:56.665471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.461 qpair failed and we were unable to recover it. 00:26:38.461 [2024-10-08 18:33:56.665597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.461 [2024-10-08 18:33:56.665627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.461 qpair failed and we were unable to recover it. 00:26:38.461 [2024-10-08 18:33:56.665759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.461 [2024-10-08 18:33:56.665794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.461 qpair failed and we were unable to recover it. 
00:26:38.461 [2024-10-08 18:33:56.665964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.461 [2024-10-08 18:33:56.665999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420
00:26:38.461 qpair failed and we were unable to recover it.
00:26:38.461 [2024-10-08 18:33:56.666348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.461 [2024-10-08 18:33:56.666393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420
00:26:38.461 qpair failed and we were unable to recover it.
00:26:38.463 [2024-10-08 18:33:56.679703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.463 [2024-10-08 18:33:56.679810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420
00:26:38.463 qpair failed and we were unable to recover it.
00:26:38.461 - 00:26:38.745 [condensed: the same three-line sequence (posix_sock_create connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock sock connection error -> "qpair failed and we were unable to recover it.") repeats continuously from 18:33:56.665964 through 18:33:56.705301 for tqpair values 0x1ff7060, 0x7f816c000b90 and 0x7f8174000b90, always against addr=10.0.0.2, port=4420; every connection attempt fails and no qpair recovers.]
00:26:38.745 [2024-10-08 18:33:56.705439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.705470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 00:26:38.745 [2024-10-08 18:33:56.705626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.705687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 00:26:38.745 [2024-10-08 18:33:56.705890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.705921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 00:26:38.745 [2024-10-08 18:33:56.706071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.706135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 00:26:38.745 [2024-10-08 18:33:56.706383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.706414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 00:26:38.745 [2024-10-08 18:33:56.706546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.706576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 00:26:38.745 [2024-10-08 18:33:56.706703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.706765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 00:26:38.745 [2024-10-08 18:33:56.707032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.707099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 00:26:38.745 [2024-10-08 18:33:56.707309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.707340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 00:26:38.745 [2024-10-08 18:33:56.707465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.707496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 
00:26:38.745 [2024-10-08 18:33:56.707692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.707758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 00:26:38.745 [2024-10-08 18:33:56.708069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.708145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 00:26:38.745 [2024-10-08 18:33:56.708403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.708472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 00:26:38.745 [2024-10-08 18:33:56.708677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.708745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 00:26:38.745 [2024-10-08 18:33:56.708949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.709029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 00:26:38.745 [2024-10-08 18:33:56.709336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.709404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 00:26:38.745 [2024-10-08 18:33:56.709702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.709770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 00:26:38.745 [2024-10-08 18:33:56.710099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.710174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 00:26:38.745 [2024-10-08 18:33:56.710501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.710571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 00:26:38.745 [2024-10-08 18:33:56.710862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.710929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 
00:26:38.745 [2024-10-08 18:33:56.711168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.711235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 00:26:38.745 [2024-10-08 18:33:56.711490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.711527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 00:26:38.745 [2024-10-08 18:33:56.711651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.711686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 00:26:38.745 [2024-10-08 18:33:56.711887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.711954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 00:26:38.745 [2024-10-08 18:33:56.712255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.712343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 00:26:38.745 [2024-10-08 18:33:56.712563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.712631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 00:26:38.745 [2024-10-08 18:33:56.712820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.712887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 00:26:38.745 [2024-10-08 18:33:56.713138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.713206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 00:26:38.745 [2024-10-08 18:33:56.713504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.713573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 00:26:38.745 [2024-10-08 18:33:56.713812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.713879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 
00:26:38.745 [2024-10-08 18:33:56.714132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.714202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 00:26:38.745 [2024-10-08 18:33:56.714512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.714581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 00:26:38.745 [2024-10-08 18:33:56.714852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.714888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 00:26:38.745 [2024-10-08 18:33:56.715026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.715062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 00:26:38.745 [2024-10-08 18:33:56.715323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.745 [2024-10-08 18:33:56.715391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.745 qpair failed and we were unable to recover it. 00:26:38.745 [2024-10-08 18:33:56.715679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.746 [2024-10-08 18:33:56.715747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.746 qpair failed and we were unable to recover it. 00:26:38.746 [2024-10-08 18:33:56.716034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.746 [2024-10-08 18:33:56.716070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.746 qpair failed and we were unable to recover it. 00:26:38.746 [2024-10-08 18:33:56.716242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.746 [2024-10-08 18:33:56.716323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.746 qpair failed and we were unable to recover it. 00:26:38.746 [2024-10-08 18:33:56.716578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.746 [2024-10-08 18:33:56.716644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.746 qpair failed and we were unable to recover it. 00:26:38.746 [2024-10-08 18:33:56.716861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.746 [2024-10-08 18:33:56.716928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.746 qpair failed and we were unable to recover it. 
00:26:38.746 [2024-10-08 18:33:56.717148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.746 [2024-10-08 18:33:56.717216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.746 qpair failed and we were unable to recover it. 00:26:38.746 [2024-10-08 18:33:56.717466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.746 [2024-10-08 18:33:56.717534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.746 qpair failed and we were unable to recover it. 00:26:38.746 [2024-10-08 18:33:56.717817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.746 [2024-10-08 18:33:56.717884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.746 qpair failed and we were unable to recover it. 00:26:38.746 [2024-10-08 18:33:56.718172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.746 [2024-10-08 18:33:56.718207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.746 qpair failed and we were unable to recover it. 00:26:38.746 [2024-10-08 18:33:56.718354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.746 [2024-10-08 18:33:56.718400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.746 qpair failed and we were unable to recover it. 00:26:38.746 [2024-10-08 18:33:56.718612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.746 [2024-10-08 18:33:56.718679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.746 qpair failed and we were unable to recover it. 00:26:38.746 [2024-10-08 18:33:56.718884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.746 [2024-10-08 18:33:56.718950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.746 qpair failed and we were unable to recover it. 00:26:38.746 [2024-10-08 18:33:56.719197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.746 [2024-10-08 18:33:56.719266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.746 qpair failed and we were unable to recover it. 00:26:38.746 [2024-10-08 18:33:56.719511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.746 [2024-10-08 18:33:56.719582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.746 qpair failed and we were unable to recover it. 00:26:38.746 [2024-10-08 18:33:56.719824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.746 [2024-10-08 18:33:56.719892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.746 qpair failed and we were unable to recover it. 
00:26:38.746 [2024-10-08 18:33:56.720162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.746 [2024-10-08 18:33:56.720229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.746 qpair failed and we were unable to recover it. 00:26:38.746 [2024-10-08 18:33:56.720537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.746 [2024-10-08 18:33:56.720606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.746 qpair failed and we were unable to recover it. 00:26:38.746 [2024-10-08 18:33:56.720831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.746 [2024-10-08 18:33:56.720897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.746 qpair failed and we were unable to recover it. 00:26:38.746 [2024-10-08 18:33:56.721158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.746 [2024-10-08 18:33:56.721225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.746 qpair failed and we were unable to recover it. 00:26:38.746 [2024-10-08 18:33:56.721542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.746 [2024-10-08 18:33:56.721584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.746 qpair failed and we were unable to recover it. 00:26:38.746 [2024-10-08 18:33:56.721733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.746 [2024-10-08 18:33:56.721769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.746 qpair failed and we were unable to recover it. 00:26:38.746 [2024-10-08 18:33:56.722069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.746 [2024-10-08 18:33:56.722136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.746 qpair failed and we were unable to recover it. 00:26:38.746 [2024-10-08 18:33:56.722350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.746 [2024-10-08 18:33:56.722418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.746 qpair failed and we were unable to recover it. 00:26:38.746 [2024-10-08 18:33:56.722698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.746 [2024-10-08 18:33:56.722765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.746 qpair failed and we were unable to recover it. 00:26:38.746 [2024-10-08 18:33:56.723015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.746 [2024-10-08 18:33:56.723085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.746 qpair failed and we were unable to recover it. 
00:26:38.746 [2024-10-08 18:33:56.723381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.746 [2024-10-08 18:33:56.723450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.746 qpair failed and we were unable to recover it. 00:26:38.746 [2024-10-08 18:33:56.723740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.746 [2024-10-08 18:33:56.723806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.746 qpair failed and we were unable to recover it. 00:26:38.746 [2024-10-08 18:33:56.724050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.746 [2024-10-08 18:33:56.724119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.746 qpair failed and we were unable to recover it. 00:26:38.746 [2024-10-08 18:33:56.724406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.746 [2024-10-08 18:33:56.724474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.746 qpair failed and we were unable to recover it. 00:26:38.746 [2024-10-08 18:33:56.724723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.746 [2024-10-08 18:33:56.724790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.746 qpair failed and we were unable to recover it. 00:26:38.746 [2024-10-08 18:33:56.725079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.746 [2024-10-08 18:33:56.725146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.746 qpair failed and we were unable to recover it. 00:26:38.746 [2024-10-08 18:33:56.725395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.746 [2024-10-08 18:33:56.725464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.746 qpair failed and we were unable to recover it. 00:26:38.747 [2024-10-08 18:33:56.725710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.725781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 00:26:38.747 [2024-10-08 18:33:56.726038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.726074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 00:26:38.747 [2024-10-08 18:33:56.726183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.726219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 
00:26:38.747 [2024-10-08 18:33:56.726500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.726568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 00:26:38.747 [2024-10-08 18:33:56.726816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.726884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 00:26:38.747 [2024-10-08 18:33:56.727153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.727220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 00:26:38.747 [2024-10-08 18:33:56.727487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.727555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 00:26:38.747 [2024-10-08 18:33:56.727819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.727886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 00:26:38.747 [2024-10-08 18:33:56.728183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.728218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 00:26:38.747 [2024-10-08 18:33:56.728331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.728366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 00:26:38.747 [2024-10-08 18:33:56.728579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.728646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 00:26:38.747 [2024-10-08 18:33:56.728909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.728976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 00:26:38.747 [2024-10-08 18:33:56.729223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.729305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 
00:26:38.747 [2024-10-08 18:33:56.729572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.729640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 00:26:38.747 [2024-10-08 18:33:56.729869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.729938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 00:26:38.747 [2024-10-08 18:33:56.730184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.730250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 00:26:38.747 [2024-10-08 18:33:56.730563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.730630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 00:26:38.747 [2024-10-08 18:33:56.730835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.730903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 00:26:38.747 [2024-10-08 18:33:56.731161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.731229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 00:26:38.747 [2024-10-08 18:33:56.731553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.731626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 00:26:38.747 [2024-10-08 18:33:56.731917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.731984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 00:26:38.747 [2024-10-08 18:33:56.732233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.732320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 00:26:38.747 [2024-10-08 18:33:56.732567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.732634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 
00:26:38.747 [2024-10-08 18:33:56.732892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.732959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 00:26:38.747 [2024-10-08 18:33:56.733228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.733264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 00:26:38.747 [2024-10-08 18:33:56.733442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.733510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 00:26:38.747 [2024-10-08 18:33:56.733788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.733856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 00:26:38.747 [2024-10-08 18:33:56.734097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.734177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 00:26:38.747 [2024-10-08 18:33:56.734440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.734509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 00:26:38.747 [2024-10-08 18:33:56.734795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.734831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 00:26:38.747 [2024-10-08 18:33:56.734959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.734995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 00:26:38.747 [2024-10-08 18:33:56.735195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.735261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 00:26:38.747 [2024-10-08 18:33:56.735543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.735611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 
00:26:38.747 [2024-10-08 18:33:56.735832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.735899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 00:26:38.747 [2024-10-08 18:33:56.736141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.736176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 00:26:38.747 [2024-10-08 18:33:56.736326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.747 [2024-10-08 18:33:56.736364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.747 qpair failed and we were unable to recover it. 00:26:38.747 [2024-10-08 18:33:56.736571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.736638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.736892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.736958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.737201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.737268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.737580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.737648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.737942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.738010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.738322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.738358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.738467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.738501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 
00:26:38.748 [2024-10-08 18:33:56.738740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.738807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.739066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.739133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.739375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.739443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.739733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.739801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.740054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.740124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.740417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.740485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.740785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.740853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.741141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.741208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.741481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.741550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.741741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.741809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 
00:26:38.748 [2024-10-08 18:33:56.742070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.742139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.742373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.742441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.742689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.742757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.743069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.743136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.743434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.743501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.743737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.743803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.744092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.744160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.744412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.744448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.744574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.744610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.744837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.744905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 
00:26:38.748 [2024-10-08 18:33:56.745206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.745273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.745554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.745621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.745863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.745931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.746222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.746302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.746535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.746609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.746856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.746922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.747219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.747304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.747627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.747695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.747987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.748065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.748221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.748260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 
00:26:38.748 [2024-10-08 18:33:56.748455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.748495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.748 [2024-10-08 18:33:56.748676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.748 [2024-10-08 18:33:56.748714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.748 qpair failed and we were unable to recover it. 00:26:38.749 [2024-10-08 18:33:56.748883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.748944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 00:26:38.749 [2024-10-08 18:33:56.749156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.749206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 00:26:38.749 [2024-10-08 18:33:56.749424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.749475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 00:26:38.749 [2024-10-08 18:33:56.749672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.749728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 00:26:38.749 [2024-10-08 18:33:56.749897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.749932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 00:26:38.749 [2024-10-08 18:33:56.750057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.750091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 00:26:38.749 [2024-10-08 18:33:56.750247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.750329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 00:26:38.749 [2024-10-08 18:33:56.750536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.750589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 
00:26:38.749 [2024-10-08 18:33:56.750818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.750888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 00:26:38.749 [2024-10-08 18:33:56.751150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.751219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 00:26:38.749 [2024-10-08 18:33:56.751533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.751570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 00:26:38.749 [2024-10-08 18:33:56.751721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.751774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 00:26:38.749 [2024-10-08 18:33:56.751963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.752003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 00:26:38.749 [2024-10-08 18:33:56.752143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.752176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 00:26:38.749 [2024-10-08 18:33:56.752355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.752426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 00:26:38.749 [2024-10-08 18:33:56.752694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.752762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 00:26:38.749 [2024-10-08 18:33:56.753009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.753075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 00:26:38.749 [2024-10-08 18:33:56.753340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.753408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 
00:26:38.749 [2024-10-08 18:33:56.753675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.753712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 00:26:38.749 [2024-10-08 18:33:56.753885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.753920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 00:26:38.749 [2024-10-08 18:33:56.754039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.754073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 00:26:38.749 [2024-10-08 18:33:56.754219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.754256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 00:26:38.749 [2024-10-08 18:33:56.754471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.754540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 00:26:38.749 [2024-10-08 18:33:56.754798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.754865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 00:26:38.749 [2024-10-08 18:33:56.755146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.755214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 00:26:38.749 [2024-10-08 18:33:56.755458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.755494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 00:26:38.749 [2024-10-08 18:33:56.755627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.755661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 00:26:38.749 [2024-10-08 18:33:56.755827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.755907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 
00:26:38.749 [2024-10-08 18:33:56.756183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.756250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 00:26:38.749 [2024-10-08 18:33:56.756528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.756564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 00:26:38.749 [2024-10-08 18:33:56.756688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.756722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 00:26:38.749 [2024-10-08 18:33:56.756823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.756857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 00:26:38.749 [2024-10-08 18:33:56.757047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.757089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 00:26:38.749 [2024-10-08 18:33:56.757228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.749 [2024-10-08 18:33:56.757322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.749 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.757466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.757502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.757645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.757680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.757955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.758022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.758337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.758375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 
00:26:38.750 [2024-10-08 18:33:56.758516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.758550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.758694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.758729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.758872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.758908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.759048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.759085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.759252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.759297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.759554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.759621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.759916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.759984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.760268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.760357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.760541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.760576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.760719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.760754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 
00:26:38.750 [2024-10-08 18:33:56.760936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.761002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.761244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.761324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.761578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.761646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.761875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.761941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.762242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.762278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.762420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.762453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.762594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.762631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.762829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.762907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.763187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.763255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.763486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.763554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 
00:26:38.750 [2024-10-08 18:33:56.763815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.763884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.764060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.764106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.764229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.764263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.764397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.764432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.764539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.764601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.764906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.764975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.765231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.765322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.765550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.765630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.765917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.765985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.766206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.766242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 
00:26:38.750 [2024-10-08 18:33:56.766393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.766428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.766644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.766710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.766981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.767050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.750 qpair failed and we were unable to recover it. 00:26:38.750 [2024-10-08 18:33:56.767237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.750 [2024-10-08 18:33:56.767272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.751 qpair failed and we were unable to recover it. 00:26:38.751 [2024-10-08 18:33:56.767439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.751 [2024-10-08 18:33:56.767481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.751 qpair failed and we were unable to recover it. 00:26:38.751 [2024-10-08 18:33:56.767701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.751 [2024-10-08 18:33:56.767768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.751 qpair failed and we were unable to recover it. 00:26:38.751 [2024-10-08 18:33:56.768058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.751 [2024-10-08 18:33:56.768126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.751 qpair failed and we were unable to recover it. 00:26:38.751 [2024-10-08 18:33:56.768425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.751 [2024-10-08 18:33:56.768495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.751 qpair failed and we were unable to recover it. 00:26:38.751 [2024-10-08 18:33:56.768711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.751 [2024-10-08 18:33:56.768794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.751 qpair failed and we were unable to recover it. 00:26:38.751 [2024-10-08 18:33:56.768968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.751 [2024-10-08 18:33:56.769010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.751 qpair failed and we were unable to recover it. 
00:26:38.751 [2024-10-08 18:33:56.769156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.751 [2024-10-08 18:33:56.769211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.751 qpair failed and we were unable to recover it. 00:26:38.751 [2024-10-08 18:33:56.769517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.751 [2024-10-08 18:33:56.769585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.751 qpair failed and we were unable to recover it. 00:26:38.751 [2024-10-08 18:33:56.769797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.751 [2024-10-08 18:33:56.769867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.751 qpair failed and we were unable to recover it. 00:26:38.751 [2024-10-08 18:33:56.770131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.751 [2024-10-08 18:33:56.770200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.751 qpair failed and we were unable to recover it. 00:26:38.751 [2024-10-08 18:33:56.770442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.751 [2024-10-08 18:33:56.770511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.751 qpair failed and we were unable to recover it. 00:26:38.751 [2024-10-08 18:33:56.770804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.751 [2024-10-08 18:33:56.770873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.751 qpair failed and we were unable to recover it. 00:26:38.751 [2024-10-08 18:33:56.771046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.751 [2024-10-08 18:33:56.771089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.751 qpair failed and we were unable to recover it. 00:26:38.751 [2024-10-08 18:33:56.771204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.751 [2024-10-08 18:33:56.771239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.751 qpair failed and we were unable to recover it. 00:26:38.751 [2024-10-08 18:33:56.771424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.751 [2024-10-08 18:33:56.771471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.751 qpair failed and we were unable to recover it. 00:26:38.751 [2024-10-08 18:33:56.771614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.751 [2024-10-08 18:33:56.771671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.751 qpair failed and we were unable to recover it. 
00:26:38.751 [2024-10-08 18:33:56.771898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.751 [2024-10-08 18:33:56.771966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.751 qpair failed and we were unable to recover it. 00:26:38.751 [2024-10-08 18:33:56.772228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.751 [2024-10-08 18:33:56.772320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.751 qpair failed and we were unable to recover it. 00:26:38.751 [2024-10-08 18:33:56.772578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.751 [2024-10-08 18:33:56.772644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.751 qpair failed and we were unable to recover it. 00:26:38.751 [2024-10-08 18:33:56.772926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.751 [2024-10-08 18:33:56.772993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.751 qpair failed and we were unable to recover it. 00:26:38.751 [2024-10-08 18:33:56.773186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.751 [2024-10-08 18:33:56.773232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.751 qpair failed and we were unable to recover it. 00:26:38.751 [2024-10-08 18:33:56.773410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.751 [2024-10-08 18:33:56.773446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.751 qpair failed and we were unable to recover it. 00:26:38.751 [2024-10-08 18:33:56.773641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.751 [2024-10-08 18:33:56.773678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.751 qpair failed and we were unable to recover it. 00:26:38.751 [2024-10-08 18:33:56.773796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.751 [2024-10-08 18:33:56.773830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.751 qpair failed and we were unable to recover it. 00:26:38.751 [2024-10-08 18:33:56.774040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.751 [2024-10-08 18:33:56.774086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.751 qpair failed and we were unable to recover it. 00:26:38.751 [2024-10-08 18:33:56.774235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.751 [2024-10-08 18:33:56.774269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.751 qpair failed and we were unable to recover it. 
00:26:38.751 [2024-10-08 18:33:56.774505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.751 [2024-10-08 18:33:56.774574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.751 qpair failed and we were unable to recover it. 00:26:38.751 [2024-10-08 18:33:56.774763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.751 [2024-10-08 18:33:56.774809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.751 qpair failed and we were unable to recover it. 00:26:38.751 [2024-10-08 18:33:56.774966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.751 [2024-10-08 18:33:56.775002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.751 qpair failed and we were unable to recover it. 00:26:38.751 [2024-10-08 18:33:56.775281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.751 [2024-10-08 18:33:56.775342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.751 qpair failed and we were unable to recover it. 00:26:38.751 [2024-10-08 18:33:56.775455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.751 [2024-10-08 18:33:56.775490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.751 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.775637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.775673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.775927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.775964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.776116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.776197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.776434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.776505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.776769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.776805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 
00:26:38.752 [2024-10-08 18:33:56.776911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.776945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.777097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.777165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.777374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.777411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.777559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.777595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.777877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.777955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.778253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.778347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.778484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.778518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.778617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.778651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.778793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.778826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.778966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.778999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 
00:26:38.752 [2024-10-08 18:33:56.779118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.779153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.779441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.779512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.779735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.779771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.779937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.779974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.780178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.780215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.780365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.780400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.780667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.780735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.781027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.781093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.781364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.781434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.781633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.781669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 
00:26:38.752 [2024-10-08 18:33:56.781791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.781824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.782024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.782059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.782232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.782268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.782390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.782424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.782544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.782578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.782712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.782748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.782966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.783033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.783272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.783376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.783666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.783732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.783960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.783996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 
00:26:38.752 [2024-10-08 18:33:56.784100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.784133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.784299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.784335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.784620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.784700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.752 [2024-10-08 18:33:56.784989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.752 [2024-10-08 18:33:56.785069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.752 qpair failed and we were unable to recover it. 00:26:38.753 [2024-10-08 18:33:56.785317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.753 [2024-10-08 18:33:56.785378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.753 qpair failed and we were unable to recover it. 00:26:38.753 [2024-10-08 18:33:56.785530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.753 [2024-10-08 18:33:56.785566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.753 qpair failed and we were unable to recover it. 00:26:38.753 [2024-10-08 18:33:56.785783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.753 [2024-10-08 18:33:56.785819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.753 qpair failed and we were unable to recover it. 00:26:38.753 [2024-10-08 18:33:56.785962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.753 [2024-10-08 18:33:56.785997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.753 qpair failed and we were unable to recover it. 00:26:38.753 [2024-10-08 18:33:56.786215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.753 [2024-10-08 18:33:56.786251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.753 qpair failed and we were unable to recover it. 00:26:38.753 [2024-10-08 18:33:56.786380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.753 [2024-10-08 18:33:56.786414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.753 qpair failed and we were unable to recover it. 
00:26:38.753 [2024-10-08 18:33:56.786649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.753 [2024-10-08 18:33:56.786729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.753 qpair failed and we were unable to recover it. 00:26:38.753 [2024-10-08 18:33:56.786938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.753 [2024-10-08 18:33:56.787018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.753 qpair failed and we were unable to recover it. 00:26:38.753 [2024-10-08 18:33:56.787203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.753 [2024-10-08 18:33:56.787242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.753 qpair failed and we were unable to recover it. 00:26:38.753 [2024-10-08 18:33:56.787373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.753 [2024-10-08 18:33:56.787408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.753 qpair failed and we were unable to recover it. 00:26:38.753 [2024-10-08 18:33:56.787633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.753 [2024-10-08 18:33:56.787674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.753 qpair failed and we were unable to recover it. 00:26:38.753 [2024-10-08 18:33:56.787795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.753 [2024-10-08 18:33:56.787831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.753 qpair failed and we were unable to recover it. 00:26:38.753 [2024-10-08 18:33:56.788039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.753 [2024-10-08 18:33:56.788103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.753 qpair failed and we were unable to recover it. 00:26:38.753 [2024-10-08 18:33:56.788379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.753 [2024-10-08 18:33:56.788443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.753 qpair failed and we were unable to recover it. 00:26:38.753 [2024-10-08 18:33:56.788684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.753 [2024-10-08 18:33:56.788746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.753 qpair failed and we were unable to recover it. 00:26:38.753 [2024-10-08 18:33:56.788985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.753 [2024-10-08 18:33:56.789047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:38.753 qpair failed and we were unable to recover it. 
00:26:38.753 [2024-10-08 18:33:56.789225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.753 [2024-10-08 18:33:56.789261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420
00:26:38.753 qpair failed and we were unable to recover it.
00:26:38.753 [2024-10-08 18:33:56.793588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.753 [2024-10-08 18:33:56.793702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420
00:26:38.753 qpair failed and we were unable to recover it.
00:26:38.755 [2024-10-08 18:33:56.813412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.755 [2024-10-08 18:33:56.813465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420
00:26:38.755 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 18:33:56.789 to 18:33:56.837, alternating among tqpair=0x7f8174000b90, 0x7f8168000b90 and 0x7f816c000b90, all with addr=10.0.0.2, port=4420 ...]
00:26:38.758 [2024-10-08 18:33:56.837924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.758 [2024-10-08 18:33:56.837962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.758 qpair failed and we were unable to recover it. 00:26:38.758 [2024-10-08 18:33:56.838102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.758 [2024-10-08 18:33:56.838138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.758 qpair failed and we were unable to recover it. 00:26:38.758 [2024-10-08 18:33:56.838348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.758 [2024-10-08 18:33:56.838417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.758 qpair failed and we were unable to recover it. 00:26:38.758 [2024-10-08 18:33:56.838656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.758 [2024-10-08 18:33:56.838724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.758 qpair failed and we were unable to recover it. 00:26:38.758 [2024-10-08 18:33:56.839019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.758 [2024-10-08 18:33:56.839091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.758 qpair failed and we were unable to recover it. 00:26:38.758 [2024-10-08 18:33:56.839320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.758 [2024-10-08 18:33:56.839357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.758 qpair failed and we were unable to recover it. 00:26:38.758 [2024-10-08 18:33:56.839502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.758 [2024-10-08 18:33:56.839538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.758 qpair failed and we were unable to recover it. 00:26:38.758 [2024-10-08 18:33:56.839690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.758 [2024-10-08 18:33:56.839770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.758 qpair failed and we were unable to recover it. 00:26:38.758 [2024-10-08 18:33:56.840011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.758 [2024-10-08 18:33:56.840079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.758 qpair failed and we were unable to recover it. 00:26:38.758 [2024-10-08 18:33:56.840348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.758 [2024-10-08 18:33:56.840419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 
00:26:38.759 [2024-10-08 18:33:56.840538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.840575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.840757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.840840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.840989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.841026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.841192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.841260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.841484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.841562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.841808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.841876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.842133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.842201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.842336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.842372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.842486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.842521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.842737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.842805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 
00:26:38.759 [2024-10-08 18:33:56.843051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.843120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.843398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.843463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.843576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.843612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.843764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.843799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.843951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.844012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.844320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.844389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.844599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.844669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.844952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.845026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.845146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.845182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.845330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.845366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 
00:26:38.759 [2024-10-08 18:33:56.845550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.845612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.845736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.845773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.846029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.846097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.846343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.846397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.846532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.846570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.846784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.846820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.846960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.846997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.847101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.847136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.847262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.847306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.847503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.847572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 
00:26:38.759 [2024-10-08 18:33:56.847787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.847824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.847994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.848061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.848253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.848306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.848532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.848568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.848704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.848740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.848990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.849026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.849173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.849209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.849428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.849464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.759 [2024-10-08 18:33:56.849587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.759 [2024-10-08 18:33:56.849624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.759 qpair failed and we were unable to recover it. 00:26:38.760 [2024-10-08 18:33:56.849741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.849776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 
00:26:38.760 [2024-10-08 18:33:56.850030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.850097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 00:26:38.760 [2024-10-08 18:33:56.850310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.850357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 00:26:38.760 [2024-10-08 18:33:56.850479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.850515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 00:26:38.760 [2024-10-08 18:33:56.850650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.850732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 00:26:38.760 [2024-10-08 18:33:56.851023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.851090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 00:26:38.760 [2024-10-08 18:33:56.851381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.851450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 00:26:38.760 [2024-10-08 18:33:56.851670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.851706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 00:26:38.760 [2024-10-08 18:33:56.851881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.851952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 00:26:38.760 [2024-10-08 18:33:56.852085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.852121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 00:26:38.760 [2024-10-08 18:33:56.852363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.852433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 
00:26:38.760 [2024-10-08 18:33:56.852724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.852791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 00:26:38.760 [2024-10-08 18:33:56.853016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.853083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 00:26:38.760 [2024-10-08 18:33:56.853324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.853361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 00:26:38.760 [2024-10-08 18:33:56.853498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.853534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 00:26:38.760 [2024-10-08 18:33:56.853678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.853714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 00:26:38.760 [2024-10-08 18:33:56.853846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.853882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 00:26:38.760 [2024-10-08 18:33:56.853995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.854031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 00:26:38.760 [2024-10-08 18:33:56.854195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.854253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 00:26:38.760 [2024-10-08 18:33:56.854425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.854461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 00:26:38.760 [2024-10-08 18:33:56.854606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.854685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 
00:26:38.760 [2024-10-08 18:33:56.854838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.854874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 00:26:38.760 [2024-10-08 18:33:56.854988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.855062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 00:26:38.760 [2024-10-08 18:33:56.855350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.855418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 00:26:38.760 [2024-10-08 18:33:56.855627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.855698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 00:26:38.760 [2024-10-08 18:33:56.855886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.855922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 00:26:38.760 [2024-10-08 18:33:56.856101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.856168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 00:26:38.760 [2024-10-08 18:33:56.856328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.856365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 00:26:38.760 [2024-10-08 18:33:56.856509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.856546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 00:26:38.760 [2024-10-08 18:33:56.856696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.856733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 00:26:38.760 [2024-10-08 18:33:56.856962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.857030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 
00:26:38.760 [2024-10-08 18:33:56.857205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.857242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 00:26:38.760 [2024-10-08 18:33:56.857390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.857427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 00:26:38.760 [2024-10-08 18:33:56.857697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.857733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 00:26:38.760 [2024-10-08 18:33:56.857870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.760 [2024-10-08 18:33:56.857907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.760 qpair failed and we were unable to recover it. 00:26:38.760 [2024-10-08 18:33:56.858166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.858243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.858521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.858558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.858712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.858748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.858901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.858937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.859089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.859126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.859340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.859414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 
00:26:38.761 [2024-10-08 18:33:56.859646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.859706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.859820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.859863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.860012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.860048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.860158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.860194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.860406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.860475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.860730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.860796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.861015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.861079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.861240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.861277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.861481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.861561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.861846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.861915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 
00:26:38.761 [2024-10-08 18:33:56.862158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.862234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.862457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.862493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.862648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.862712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.862918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.862954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.863101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.863136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.863257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.863321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.863599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.863667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.863867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.863904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.864050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.864107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.864351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.864388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 
00:26:38.761 [2024-10-08 18:33:56.864509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.864584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.864816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.864884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.865090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.865126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.865257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.865300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.865454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.865490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.865639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.865701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.865925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.865960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.866133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.866169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.866452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.866521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.866784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.866820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 
00:26:38.761 [2024-10-08 18:33:56.866959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.866995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.867143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.867179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.867300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.867337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.867506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.761 [2024-10-08 18:33:56.867588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.761 qpair failed and we were unable to recover it. 00:26:38.761 [2024-10-08 18:33:56.867779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.867815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.867926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.867962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.868129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.868165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.868308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.868372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.868671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.868739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.869014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.869051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 
00:26:38.762 [2024-10-08 18:33:56.869190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.869226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.869384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.869426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.869607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.869677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.869975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.870043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.870283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.870370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.870532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.870577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.870682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.870719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.870831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.870868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.871017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.871053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.871277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.871384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 
00:26:38.762 [2024-10-08 18:33:56.871672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.871709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.871826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.871863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.872013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.872049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.872322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.872392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.872634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.872701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.872982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.873018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.873155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.873191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.873329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.873366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.873615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.873651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.873802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.873839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 
00:26:38.762 [2024-10-08 18:33:56.873942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.873977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.874122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.874158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.874269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.874311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.874437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.874472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.874665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.874732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.874973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.875039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.875261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.875343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.875606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.875676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.875865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.875903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.876093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.876162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 
00:26:38.762 [2024-10-08 18:33:56.876346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.876383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.876501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.876538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.876650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.876686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.876831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.762 [2024-10-08 18:33:56.876866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.762 qpair failed and we were unable to recover it. 00:26:38.762 [2024-10-08 18:33:56.877008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.877044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.877190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.877268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.877511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.877579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.877837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.877904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.878123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.878192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.878422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.878459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 
00:26:38.763 [2024-10-08 18:33:56.878607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.878689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.878880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.878922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.879064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.879100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.879363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.879431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.879574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.879611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.879796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.879832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.879985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.880021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.880188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.880249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.880495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.880570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.880854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.880923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 
00:26:38.763 [2024-10-08 18:33:56.881207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.881243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.881407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.881444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.881557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.881594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.881748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.881801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.882055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.882139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.882266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.882316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.882435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.882471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.882656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.882693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.882811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.882847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.883060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.883128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 
00:26:38.763 [2024-10-08 18:33:56.883391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.883428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.883550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.883587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.883697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.883734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.883924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.883961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.884102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.884139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.884354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.884391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.884509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.884553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.884696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.884732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.884864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.884901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.885175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.885243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 
00:26:38.763 [2024-10-08 18:33:56.885393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.885429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.885546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.885582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.885720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.885783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.886026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.763 [2024-10-08 18:33:56.886093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.763 qpair failed and we were unable to recover it. 00:26:38.763 [2024-10-08 18:33:56.886363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.886431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 00:26:38.764 [2024-10-08 18:33:56.886675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.886711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 00:26:38.764 [2024-10-08 18:33:56.886843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.886879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 00:26:38.764 [2024-10-08 18:33:56.887021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.887057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 00:26:38.764 [2024-10-08 18:33:56.887199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.887235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 00:26:38.764 [2024-10-08 18:33:56.887363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.887400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 
00:26:38.764 [2024-10-08 18:33:56.887524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.887560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 00:26:38.764 [2024-10-08 18:33:56.887666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.887707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 00:26:38.764 [2024-10-08 18:33:56.887845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.887881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 00:26:38.764 [2024-10-08 18:33:56.887995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.888032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 00:26:38.764 [2024-10-08 18:33:56.888173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.888209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 00:26:38.764 [2024-10-08 18:33:56.888322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.888358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 00:26:38.764 [2024-10-08 18:33:56.888490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.888526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 00:26:38.764 [2024-10-08 18:33:56.888702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.888762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 00:26:38.764 [2024-10-08 18:33:56.888897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.888933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 00:26:38.764 [2024-10-08 18:33:56.889080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.889116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 
00:26:38.764 [2024-10-08 18:33:56.889372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.889408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 00:26:38.764 [2024-10-08 18:33:56.889519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.889604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 00:26:38.764 [2024-10-08 18:33:56.889774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.889810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 00:26:38.764 [2024-10-08 18:33:56.889918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.889955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 00:26:38.764 [2024-10-08 18:33:56.890067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.890103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 00:26:38.764 [2024-10-08 18:33:56.890253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.890297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 00:26:38.764 [2024-10-08 18:33:56.890414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.890450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 00:26:38.764 [2024-10-08 18:33:56.890618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.890681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 00:26:38.764 [2024-10-08 18:33:56.890909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.890986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 00:26:38.764 [2024-10-08 18:33:56.891101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.891138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 
00:26:38.764 [2024-10-08 18:33:56.891284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.891326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 00:26:38.764 [2024-10-08 18:33:56.891568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.891604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 00:26:38.764 [2024-10-08 18:33:56.891752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.891789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 00:26:38.764 [2024-10-08 18:33:56.892024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.892060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 00:26:38.764 [2024-10-08 18:33:56.892206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.892242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 00:26:38.764 [2024-10-08 18:33:56.892384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.892420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 00:26:38.764 [2024-10-08 18:33:56.892627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.892689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 00:26:38.764 [2024-10-08 18:33:56.892919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.892981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 00:26:38.764 [2024-10-08 18:33:56.893204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.893267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 00:26:38.764 [2024-10-08 18:33:56.893527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.764 [2024-10-08 18:33:56.893593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.764 qpair failed and we were unable to recover it. 
00:26:38.764 [2024-10-08 18:33:56.893753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.893789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.893909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.893944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.894080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.894117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.894339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.894376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.894605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.894641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.894808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.894844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.895069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.895131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.895248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.895284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.895409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.895445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.895585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.895621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 
00:26:38.765 [2024-10-08 18:33:56.895738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.895774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.895886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.895932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.896074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.896110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.896220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.896256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.896387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.896424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.896604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.896666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.896895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.896932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.897051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.897087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.897206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.897242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.897468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.897504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 
00:26:38.765 [2024-10-08 18:33:56.897669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.897732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.897983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.898042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.898272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.898319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.898487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.898523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.898691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.898752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.898894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.898931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.899079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.899167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.899278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.899324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.899445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.899481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.899645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.899681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 
00:26:38.765 [2024-10-08 18:33:56.899794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.899829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.900001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.900037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.900171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.900208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.900309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.900346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.900466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.900503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.900647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.900683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.900828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.900863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.901125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.901183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.901393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.901431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 00:26:38.765 [2024-10-08 18:33:56.901584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.765 [2024-10-08 18:33:56.901620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.765 qpair failed and we were unable to recover it. 
00:26:38.765 [2024-10-08 18:33:56.901795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.901833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.902029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.902088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.902347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.902384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.902530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.902585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.902775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.902811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.902941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.902977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.903182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.903218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.903362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.903399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.903578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.903638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.903808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.903864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 
00:26:38.766 [2024-10-08 18:33:56.904034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.904069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.904186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.904259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.904433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.904470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.904604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.904662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.904795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.904832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.904939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.904975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.905120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.905179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.905397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.905457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.905640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.905700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.905952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.906009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 
00:26:38.766 [2024-10-08 18:33:56.906108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.906144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.906256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.906301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.906493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.906552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.906788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.906850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.906993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.907029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.907147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.907183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.907357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.907393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.907615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.907674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.907841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.907877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.908009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.908068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 
00:26:38.766 [2024-10-08 18:33:56.908258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.908301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.908469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.908528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.908698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.908734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.908880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.908915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.909110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.909170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.909366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.909427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.909627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.909705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.909908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.909962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.910173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.910209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 00:26:38.766 [2024-10-08 18:33:56.910324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.766 [2024-10-08 18:33:56.910361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.766 qpair failed and we were unable to recover it. 
00:26:38.766 [2024-10-08 18:33:56.910473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.910536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.910728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.910764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.910906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.910962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.911130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.911166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.911309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.911346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.911514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.911601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.911777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.911833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.912044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.912099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.912278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.912346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.912502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.912557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 
00:26:38.767 [2024-10-08 18:33:56.912778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.912814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.912985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.913027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.913272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.913320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.913441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.913477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.913613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.913649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.913769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.913804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.913913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.913968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.914193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.914229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.914379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.914436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.914628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.914704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 
00:26:38.767 [2024-10-08 18:33:56.914858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.914894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.915038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.915074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.915226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.915305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.915438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.915475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.915656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.915725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.915879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.915915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.916028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.916064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.916195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.916230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.916361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.916398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.916648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.916702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 
00:26:38.767 [2024-10-08 18:33:56.916956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.917012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.917185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.917221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.917368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.917405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.917604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.917640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.917780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.917817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.917972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.918027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.918149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.918184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.918298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.918335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.918437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.918479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 00:26:38.767 [2024-10-08 18:33:56.918647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.767 [2024-10-08 18:33:56.918683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.767 qpair failed and we were unable to recover it. 
00:26:38.767 [2024-10-08 18:33:56.918827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.918881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 00:26:38.768 [2024-10-08 18:33:56.919026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.919063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 00:26:38.768 [2024-10-08 18:33:56.919197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.919232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 00:26:38.768 [2024-10-08 18:33:56.919368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.919424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 00:26:38.768 [2024-10-08 18:33:56.919602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.919638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 00:26:38.768 [2024-10-08 18:33:56.919781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.919817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 00:26:38.768 [2024-10-08 18:33:56.919921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.919957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 00:26:38.768 [2024-10-08 18:33:56.920096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.920132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 00:26:38.768 [2024-10-08 18:33:56.920301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.920357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 00:26:38.768 [2024-10-08 18:33:56.920525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.920580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 
00:26:38.768 [2024-10-08 18:33:56.920794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.920830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 00:26:38.768 [2024-10-08 18:33:56.920942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.920976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 00:26:38.768 [2024-10-08 18:33:56.921085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.921121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 00:26:38.768 [2024-10-08 18:33:56.921260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.921307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 00:26:38.768 [2024-10-08 18:33:56.921555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.921609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 00:26:38.768 [2024-10-08 18:33:56.921819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.921863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 00:26:38.768 [2024-10-08 18:33:56.921984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.922021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 00:26:38.768 [2024-10-08 18:33:56.922142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.922200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 00:26:38.768 [2024-10-08 18:33:56.922502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.922564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 00:26:38.768 [2024-10-08 18:33:56.922772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.922827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 
00:26:38.768 [2024-10-08 18:33:56.922981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.923036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 00:26:38.768 [2024-10-08 18:33:56.923239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.923310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 00:26:38.768 [2024-10-08 18:33:56.923508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.923591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 00:26:38.768 [2024-10-08 18:33:56.923807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.923862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 00:26:38.768 [2024-10-08 18:33:56.924031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.924086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 00:26:38.768 [2024-10-08 18:33:56.924338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.924395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 00:26:38.768 [2024-10-08 18:33:56.924631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.924707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 00:26:38.768 [2024-10-08 18:33:56.924888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.924944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 00:26:38.768 [2024-10-08 18:33:56.925155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.925210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 00:26:38.768 [2024-10-08 18:33:56.925414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.925489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 
00:26:38.768 [2024-10-08 18:33:56.925791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.925876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 00:26:38.768 [2024-10-08 18:33:56.926091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.926145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 00:26:38.768 [2024-10-08 18:33:56.926311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.926377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 00:26:38.768 [2024-10-08 18:33:56.926607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.926686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 00:26:38.768 [2024-10-08 18:33:56.926896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.926950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.768 qpair failed and we were unable to recover it. 00:26:38.768 [2024-10-08 18:33:56.927192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.768 [2024-10-08 18:33:56.927246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.927437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.927516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.927799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.927872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.928037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.928099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.928276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.928350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 
00:26:38.769 [2024-10-08 18:33:56.928577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.928651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.928947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.929020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.929193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.929251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.929521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.929597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.929788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.929867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.930063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.930119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.930306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.930362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.930585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.930640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.930889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.930963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.931160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.931215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 
00:26:38.769 [2024-10-08 18:33:56.931431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.931505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.931690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.931745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.931914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.931970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.932124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.932180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.932381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.932438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.932689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.932763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.932978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.933033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.933235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.933305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.933466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.933522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.933689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.933744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 
00:26:38.769 [2024-10-08 18:33:56.933899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.933953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.934157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.934212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.934395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.934451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.934648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.934703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.934908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.934963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.935141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.935197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.935428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.935503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.935696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.935752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.935965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.936020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.936262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.936344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 
00:26:38.769 [2024-10-08 18:33:56.936545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.936620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.936830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.936885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.937094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.937148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.937318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.937377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.937661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.769 [2024-10-08 18:33:56.937733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.769 qpair failed and we were unable to recover it. 00:26:38.769 [2024-10-08 18:33:56.937994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.938054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.938246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.938326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.938576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.938652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.938868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.938950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.939111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.939166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 
00:26:38.770 [2024-10-08 18:33:56.939405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.939479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.939731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.939786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.939984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.940049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.940248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.940347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.940560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.940634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.940861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.940936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.941148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.941203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.941438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.941514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.941757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.941830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.942058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.942113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 
00:26:38.770 [2024-10-08 18:33:56.942324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.942379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.942541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.942595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.942854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.942929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.943169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.943225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.943528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.943608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.943839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.943912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.944079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.944134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.944375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.944450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.944694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.944765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.944994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.945049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 
00:26:38.770 [2024-10-08 18:33:56.945220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.945276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.945546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.945624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.945835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.945910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.946110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.946165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.946330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.946386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.946658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.946731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.946941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.946996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.947153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.947207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.947514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.947570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.947820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.947876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 
00:26:38.770 [2024-10-08 18:33:56.948082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.948136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.948344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.948400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.948676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.948750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.948910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.948964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.949207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.770 [2024-10-08 18:33:56.949263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.770 qpair failed and we were unable to recover it. 00:26:38.770 [2024-10-08 18:33:56.949540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.949613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.949852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.949925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.950153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.950208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.950439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.950524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.950759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.950832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 
00:26:38.771 [2024-10-08 18:33:56.951036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.951092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.951313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.951377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.951571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.951646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.951809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.951890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.952117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.952172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.952370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.952427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.952674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.952753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.952962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.953016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.953229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.953283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.953568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.953629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 
00:26:38.771 [2024-10-08 18:33:56.953853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.953927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.954136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.954190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.954418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.954474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.954770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.954843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.955087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.955142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.955376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.955452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.955660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.955732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.955957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.956031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.956242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.956312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.956602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.956674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 
00:26:38.771 [2024-10-08 18:33:56.956912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.956986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.957177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.957233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.957532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.957607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.957849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.957904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.958107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.958162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.958401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.958476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.958710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.958792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.959071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.959146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.959418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.959491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.959688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.959776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 
00:26:38.771 [2024-10-08 18:33:56.960020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.960095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.960310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.960366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.960584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.960658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.960878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.960950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.771 [2024-10-08 18:33:56.961184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.771 [2024-10-08 18:33:56.961238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:38.771 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.961558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.961662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.961905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.961978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.962266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.962364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.962682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.962764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.963106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.963175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 
00:26:38.772 [2024-10-08 18:33:56.963481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.963536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.963770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.963841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.964053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.964126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.964424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.964481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.964698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.964763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.965010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.965076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.965331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.965406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.965653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.965708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.965973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.966043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.966276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.966351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 
00:26:38.772 [2024-10-08 18:33:56.966525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.966590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.966863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.966932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.967151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.967228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.967486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.967541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.967787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.967854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.968121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.968191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.968431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.968488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.968713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.968780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.969063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.969132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.969379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.969443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 
00:26:38.772 [2024-10-08 18:33:56.969652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.969717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.969966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.970034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.970231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.970313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.970524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.970591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.970831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.970900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.971220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.971321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.971488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.971553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.971812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.971867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.972116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.972186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.972465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.972520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 
00:26:38.772 [2024-10-08 18:33:56.972710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.972765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.972968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.973031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.973303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.973381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.973646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.973713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.973983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.772 [2024-10-08 18:33:56.974057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.772 qpair failed and we were unable to recover it. 00:26:38.772 [2024-10-08 18:33:56.974306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.974379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 00:26:38.773 [2024-10-08 18:33:56.974678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.974745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 00:26:38.773 [2024-10-08 18:33:56.975034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.975102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 00:26:38.773 [2024-10-08 18:33:56.975307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.975387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 00:26:38.773 [2024-10-08 18:33:56.975695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.975763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 
00:26:38.773 [2024-10-08 18:33:56.975986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.976053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 00:26:38.773 [2024-10-08 18:33:56.976346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.976417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 00:26:38.773 [2024-10-08 18:33:56.976711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.976779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 00:26:38.773 [2024-10-08 18:33:56.977060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.977128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 00:26:38.773 [2024-10-08 18:33:56.977382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.977453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 00:26:38.773 [2024-10-08 18:33:56.977746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.977813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 00:26:38.773 [2024-10-08 18:33:56.978066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.978135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 00:26:38.773 [2024-10-08 18:33:56.978400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.978471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 00:26:38.773 [2024-10-08 18:33:56.978762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.978830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 00:26:38.773 [2024-10-08 18:33:56.979072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.979140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 
00:26:38.773 [2024-10-08 18:33:56.979406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.979475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 00:26:38.773 [2024-10-08 18:33:56.979768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.979836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 00:26:38.773 [2024-10-08 18:33:56.980146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.980214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 00:26:38.773 [2024-10-08 18:33:56.980502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.980572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 00:26:38.773 [2024-10-08 18:33:56.980828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.980898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 00:26:38.773 [2024-10-08 18:33:56.981112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.981181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 00:26:38.773 [2024-10-08 18:33:56.981444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.981515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 00:26:38.773 [2024-10-08 18:33:56.981813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.981882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 00:26:38.773 [2024-10-08 18:33:56.982164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.982233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 00:26:38.773 [2024-10-08 18:33:56.982558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.982627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 
00:26:38.773 [2024-10-08 18:33:56.982885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.982954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 00:26:38.773 [2024-10-08 18:33:56.983240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.983323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 00:26:38.773 [2024-10-08 18:33:56.983577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.983645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 00:26:38.773 [2024-10-08 18:33:56.983910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.983979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 00:26:38.773 [2024-10-08 18:33:56.984187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.984256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 00:26:38.773 [2024-10-08 18:33:56.984561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.984630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 00:26:38.773 [2024-10-08 18:33:56.984845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.984921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 00:26:38.773 [2024-10-08 18:33:56.985166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.985234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 00:26:38.773 [2024-10-08 18:33:56.985340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2004ff0 (9): Bad file descriptor 00:26:38.773 [2024-10-08 18:33:56.985693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.985801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 
00:26:38.773 [2024-10-08 18:33:56.986087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.986164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.773 qpair failed and we were unable to recover it. 00:26:38.773 [2024-10-08 18:33:56.986450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.773 [2024-10-08 18:33:56.986524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.986818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.986887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.987157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.987224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.987510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.987579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.987833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.987901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.988169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.988237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.988516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.988583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.988878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.988946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.989212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.989308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 
00:26:38.774 [2024-10-08 18:33:56.989549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.989617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.989861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.989929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.990182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.990249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.990522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.990588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.990834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.990902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.991188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.991257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.991522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.991598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.991832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.991900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.992152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.992220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.992506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.992569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 
00:26:38.774 [2024-10-08 18:33:56.992842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.992905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.993123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.993189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.993451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.993515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.993726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.993790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.994028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.994091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.994274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.994353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.994567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.994629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.994839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.994901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.995134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.995196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.995445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.995508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 
00:26:38.774 [2024-10-08 18:33:56.995783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.995845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.996067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.996129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.996339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.996404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.996619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.996683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.996919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.996982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.997223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.997285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.997568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.997631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.997839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.997902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.998110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.998173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.998389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.998453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 
00:26:38.774 [2024-10-08 18:33:56.998691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.774 [2024-10-08 18:33:56.998753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.774 qpair failed and we were unable to recover it. 00:26:38.774 [2024-10-08 18:33:56.999020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.775 [2024-10-08 18:33:56.999082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.775 qpair failed and we were unable to recover it. 00:26:38.775 [2024-10-08 18:33:56.999283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.775 [2024-10-08 18:33:56.999358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.775 qpair failed and we were unable to recover it. 00:26:38.775 [2024-10-08 18:33:56.999598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.775 [2024-10-08 18:33:56.999661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.775 qpair failed and we were unable to recover it. 00:26:38.775 [2024-10-08 18:33:56.999878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.775 [2024-10-08 18:33:56.999941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.775 qpair failed and we were unable to recover it. 00:26:38.775 [2024-10-08 18:33:57.000206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.775 [2024-10-08 18:33:57.000268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.775 qpair failed and we were unable to recover it. 00:26:38.775 [2024-10-08 18:33:57.000491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.775 [2024-10-08 18:33:57.000556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.775 qpair failed and we were unable to recover it. 00:26:38.775 [2024-10-08 18:33:57.000793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.775 [2024-10-08 18:33:57.000857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.775 qpair failed and we were unable to recover it. 00:26:38.775 [2024-10-08 18:33:57.001096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.775 [2024-10-08 18:33:57.001158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.775 qpair failed and we were unable to recover it. 00:26:38.775 [2024-10-08 18:33:57.001391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.775 [2024-10-08 18:33:57.001456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:38.775 qpair failed and we were unable to recover it. 
00:26:38.775 [2024-10-08 18:33:57.001697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:38.775 [2024-10-08 18:33:57.001761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420
00:26:38.775 qpair failed and we were unable to recover it.
00:26:38.775 [... the same three-line failure repeats back-to-back, identical apart from timestamps, from 18:33:57.001697 through 18:33:57.051978 (console time 00:26:38.775 to 00:26:39.061): connect() refused (errno = 111, ECONNREFUSED), followed by the nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x1ff7060 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." ...]
00:26:39.061 [2024-10-08 18:33:57.052067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.061 [2024-10-08 18:33:57.052094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.061 qpair failed and we were unable to recover it. 00:26:39.061 [2024-10-08 18:33:57.052256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.061 [2024-10-08 18:33:57.052300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.061 qpair failed and we were unable to recover it. 00:26:39.061 [2024-10-08 18:33:57.052449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.061 [2024-10-08 18:33:57.052477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.061 qpair failed and we were unable to recover it. 00:26:39.061 [2024-10-08 18:33:57.052590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.061 [2024-10-08 18:33:57.052625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.061 qpair failed and we were unable to recover it. 00:26:39.061 [2024-10-08 18:33:57.052747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.061 [2024-10-08 18:33:57.052775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.061 qpair failed and we were unable to recover it. 00:26:39.061 [2024-10-08 18:33:57.052870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.061 [2024-10-08 18:33:57.052916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.061 qpair failed and we were unable to recover it. 00:26:39.061 [2024-10-08 18:33:57.053017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.061 [2024-10-08 18:33:57.053050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.061 qpair failed and we were unable to recover it. 00:26:39.061 [2024-10-08 18:33:57.053173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.061 [2024-10-08 18:33:57.053205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.061 qpair failed and we were unable to recover it. 00:26:39.061 [2024-10-08 18:33:57.053352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.061 [2024-10-08 18:33:57.053380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.061 qpair failed and we were unable to recover it. 00:26:39.061 [2024-10-08 18:33:57.053467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.061 [2024-10-08 18:33:57.053495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.061 qpair failed and we were unable to recover it. 
00:26:39.061 [2024-10-08 18:33:57.053619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.061 [2024-10-08 18:33:57.053653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.061 qpair failed and we were unable to recover it. 00:26:39.061 [2024-10-08 18:33:57.053783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.061 [2024-10-08 18:33:57.053816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.061 qpair failed and we were unable to recover it. 00:26:39.061 [2024-10-08 18:33:57.053924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.061 [2024-10-08 18:33:57.053957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.061 qpair failed and we were unable to recover it. 00:26:39.061 [2024-10-08 18:33:57.054139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.061 [2024-10-08 18:33:57.054173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.061 qpair failed and we were unable to recover it. 00:26:39.061 [2024-10-08 18:33:57.054271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.061 [2024-10-08 18:33:57.054320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.061 qpair failed and we were unable to recover it. 00:26:39.061 [2024-10-08 18:33:57.054453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.061 [2024-10-08 18:33:57.054482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.061 qpair failed and we were unable to recover it. 00:26:39.061 [2024-10-08 18:33:57.054595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.061 [2024-10-08 18:33:57.054623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.061 qpair failed and we were unable to recover it. 00:26:39.061 [2024-10-08 18:33:57.054712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.061 [2024-10-08 18:33:57.054739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.061 qpair failed and we were unable to recover it. 00:26:39.061 [2024-10-08 18:33:57.054850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.061 [2024-10-08 18:33:57.054876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.061 qpair failed and we were unable to recover it. 00:26:39.061 [2024-10-08 18:33:57.054964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.061 [2024-10-08 18:33:57.054990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.061 qpair failed and we were unable to recover it. 
00:26:39.061 [2024-10-08 18:33:57.055083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.061 [2024-10-08 18:33:57.055116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.061 qpair failed and we were unable to recover it. 00:26:39.061 [2024-10-08 18:33:57.055208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.061 [2024-10-08 18:33:57.055235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.061 qpair failed and we were unable to recover it. 00:26:39.061 [2024-10-08 18:33:57.055362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.061 [2024-10-08 18:33:57.055393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.061 qpair failed and we were unable to recover it. 00:26:39.061 [2024-10-08 18:33:57.055495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.061 [2024-10-08 18:33:57.055526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.061 qpair failed and we were unable to recover it. 00:26:39.061 [2024-10-08 18:33:57.055611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.061 [2024-10-08 18:33:57.055640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.061 qpair failed and we were unable to recover it. 00:26:39.061 [2024-10-08 18:33:57.055794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.061 [2024-10-08 18:33:57.055833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.061 qpair failed and we were unable to recover it. 00:26:39.061 [2024-10-08 18:33:57.055950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.061 [2024-10-08 18:33:57.055989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.061 qpair failed and we were unable to recover it. 00:26:39.061 [2024-10-08 18:33:57.056131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.056163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.056276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.056334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.056434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.056466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 
00:26:39.062 [2024-10-08 18:33:57.056597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.056628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.056748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.056779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.056913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.056952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.057115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.057167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.057356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.057389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.057485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.057516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.057706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.057738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.057904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.057944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.058052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.058090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.058272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.058313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 
00:26:39.062 [2024-10-08 18:33:57.058471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.058504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.058661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.058700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.058865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.058905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.059066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.059106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.059232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.059295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.059424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.059455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.059550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.059598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.059717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.059754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.059892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.059931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.060117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.060156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 
00:26:39.062 [2024-10-08 18:33:57.060296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.060339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.060471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.060502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.060616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.060646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.060793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.060845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.060978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.061017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.061171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.061204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.061356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.061388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.061493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.061525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.061637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.061669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.061801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.061842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 
00:26:39.062 [2024-10-08 18:33:57.062004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.062052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.062208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.062247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.062375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.062408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.062512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.062556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.062705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.062739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.062835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.062867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.062999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.063039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.062 [2024-10-08 18:33:57.063203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.062 [2024-10-08 18:33:57.063234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.062 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.063371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.063403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.063514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.063545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 
00:26:39.063 [2024-10-08 18:33:57.063789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.063828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.063993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.064032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.064211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.064244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.064391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.064425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.064609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.064664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.064861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.064913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.065042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.065086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.065215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.065265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.065425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.065470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.065606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.065642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 
00:26:39.063 [2024-10-08 18:33:57.065789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.065843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.066035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.066078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.066249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.066283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.066415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.066446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.066578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.066611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.066755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.066797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.066951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.067002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.067116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.067166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.067308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.067347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.067473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.067505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 
00:26:39.063 [2024-10-08 18:33:57.067675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.067707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.067870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.067909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.068053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.068098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.068213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.068262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.068400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.068432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.068601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.068657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.068763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.068796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.068922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.068971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.069088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.069120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.069208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.069238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 
00:26:39.063 [2024-10-08 18:33:57.069364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.069397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.069491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.069522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.069657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.069690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.069793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.069824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.069933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.069966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.070059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.070108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.070260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.070298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.070399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.063 [2024-10-08 18:33:57.070431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.063 qpair failed and we were unable to recover it. 00:26:39.063 [2024-10-08 18:33:57.070603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.064 [2024-10-08 18:33:57.070655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.064 qpair failed and we were unable to recover it. 00:26:39.064 [2024-10-08 18:33:57.070781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.064 [2024-10-08 18:33:57.070820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.064 qpair failed and we were unable to recover it. 
00:26:39.064 [2024-10-08 18:33:57.070969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.064 [2024-10-08 18:33:57.071008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.064 qpair failed and we were unable to recover it. 00:26:39.064 [2024-10-08 18:33:57.071133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.064 [2024-10-08 18:33:57.071166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.064 qpair failed and we were unable to recover it. 00:26:39.064 [2024-10-08 18:33:57.071337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.064 [2024-10-08 18:33:57.071370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.064 qpair failed and we were unable to recover it. 00:26:39.064 [2024-10-08 18:33:57.071466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.064 [2024-10-08 18:33:57.071498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.064 qpair failed and we were unable to recover it. 00:26:39.064 [2024-10-08 18:33:57.071618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.064 [2024-10-08 18:33:57.071673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.064 qpair failed and we were unable to recover it. 00:26:39.064 [2024-10-08 18:33:57.071847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.064 [2024-10-08 18:33:57.071896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.064 qpair failed and we were unable to recover it. 00:26:39.064 [2024-10-08 18:33:57.072008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.064 [2024-10-08 18:33:57.072040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.064 qpair failed and we were unable to recover it. 00:26:39.064 [2024-10-08 18:33:57.072217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.064 [2024-10-08 18:33:57.072249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.064 qpair failed and we were unable to recover it. 00:26:39.064 [2024-10-08 18:33:57.072377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.064 [2024-10-08 18:33:57.072409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.064 qpair failed and we were unable to recover it. 00:26:39.064 [2024-10-08 18:33:57.072540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.064 [2024-10-08 18:33:57.072572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.064 qpair failed and we were unable to recover it. 
00:26:39.064 [2024-10-08 18:33:57.072726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.064 [2024-10-08 18:33:57.072767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.064 qpair failed and we were unable to recover it. 00:26:39.064 [2024-10-08 18:33:57.072883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.064 [2024-10-08 18:33:57.072934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.064 qpair failed and we were unable to recover it. 00:26:39.064 [2024-10-08 18:33:57.073049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.064 [2024-10-08 18:33:57.073098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.064 qpair failed and we were unable to recover it. 00:26:39.064 [2024-10-08 18:33:57.073228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.064 [2024-10-08 18:33:57.073260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.064 qpair failed and we were unable to recover it. 00:26:39.064 [2024-10-08 18:33:57.073434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.064 [2024-10-08 18:33:57.073466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.064 qpair failed and we were unable to recover it. 00:26:39.064 [2024-10-08 18:33:57.073604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.064 [2024-10-08 18:33:57.073637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.064 qpair failed and we were unable to recover it. 00:26:39.064 [2024-10-08 18:33:57.073769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.064 [2024-10-08 18:33:57.073808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.064 qpair failed and we were unable to recover it. 00:26:39.064 [2024-10-08 18:33:57.073952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.064 [2024-10-08 18:33:57.073991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.064 qpair failed and we were unable to recover it. 00:26:39.064 [2024-10-08 18:33:57.074145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.064 [2024-10-08 18:33:57.074177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.064 qpair failed and we were unable to recover it. 00:26:39.064 [2024-10-08 18:33:57.074303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.064 [2024-10-08 18:33:57.074342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.064 qpair failed and we were unable to recover it. 
00:26:39.064 [2024-10-08 18:33:57.074429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.064 [2024-10-08 18:33:57.074460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420
00:26:39.064 qpair failed and we were unable to recover it.
00:26:39.064 [2024-10-08 18:33:57.074585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.064 [2024-10-08 18:33:57.074624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420
00:26:39.064 qpair failed and we were unable to recover it.
00:26:39.064 [2024-10-08 18:33:57.074845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.064 [2024-10-08 18:33:57.074884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420
00:26:39.064 qpair failed and we were unable to recover it.
[log elided: the same three-line sequence -- posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. -- repeats continuously from 18:33:57.074 through 18:33:57.110]
00:26:39.069 [2024-10-08 18:33:57.110717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.069 [2024-10-08 18:33:57.110749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420
00:26:39.069 qpair failed and we were unable to recover it.
00:26:39.069 [2024-10-08 18:33:57.110857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.069 [2024-10-08 18:33:57.110888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420
00:26:39.069 qpair failed and we were unable to recover it.
00:26:39.069 [2024-10-08 18:33:57.111040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.069 [2024-10-08 18:33:57.111072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.069 qpair failed and we were unable to recover it. 00:26:39.069 [2024-10-08 18:33:57.111176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.069 [2024-10-08 18:33:57.111215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.069 qpair failed and we were unable to recover it. 00:26:39.069 [2024-10-08 18:33:57.111337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.069 [2024-10-08 18:33:57.111370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.069 qpair failed and we were unable to recover it. 00:26:39.069 [2024-10-08 18:33:57.111478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.069 [2024-10-08 18:33:57.111510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.111609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.111641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.111751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.111782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.111917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.111950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.112087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.112120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.112225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.112257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.112381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.112412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 
00:26:39.070 [2024-10-08 18:33:57.112533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.112564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.112652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.112681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.112804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.112835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.112963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.112993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.113145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.113176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.113306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.113338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.113481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.113511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.113606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.113635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.113729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.113759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.113882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.113918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 
00:26:39.070 [2024-10-08 18:33:57.114048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.114079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.114203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.114234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.114363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.114395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.114492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.114523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.114623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.114653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.114773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.114804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.114908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.114939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.115034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.115065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.115152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.115187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.115357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.115387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 
00:26:39.070 [2024-10-08 18:33:57.115510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.115540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.115657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.115686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.115816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.115846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.115977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.116007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.116132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.116161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.116311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.116341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.116438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.116466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.116559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.116588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.116709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.116739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.116856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.116886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 
00:26:39.070 [2024-10-08 18:33:57.117031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.117060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.117148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.117176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.117267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.117316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.117438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.070 [2024-10-08 18:33:57.117467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.070 qpair failed and we were unable to recover it. 00:26:39.070 [2024-10-08 18:33:57.117563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.117592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.117716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.117745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.117865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.117899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.118018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.118048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.118195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.118224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.118322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.118352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 
00:26:39.071 [2024-10-08 18:33:57.118441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.118471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.118559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.118587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.118686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.118729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.118894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.118927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.119026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.119059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.119181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.119211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.119327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.119358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.119452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.119481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.119567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.119596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.119721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.119751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 
00:26:39.071 [2024-10-08 18:33:57.119877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.119907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.120011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.120040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.120128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.120159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.120254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.120283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.120394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.120422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.120537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.120566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.120659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.120687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.120838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.120867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.120948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.120975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.121068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.121096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 
00:26:39.071 [2024-10-08 18:33:57.121212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.121241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.121368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.121399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.121496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.121525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.121650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.121684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.121792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.121821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.121918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.121947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.122035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.122063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.122149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.122179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.122315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.122345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.122466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.122496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 
00:26:39.071 [2024-10-08 18:33:57.122625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.122653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.122743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.122772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.122866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.122894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.122982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.123010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.071 [2024-10-08 18:33:57.123105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.071 [2024-10-08 18:33:57.123133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.071 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.123252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.123281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.123403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.123431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.123523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.123552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.123667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.123695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.123810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.123838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 
00:26:39.072 [2024-10-08 18:33:57.123933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.123961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.124079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.124107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.124226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.124255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.124380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.124409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.124527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.124555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.124670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.124698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.124788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.124816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.124900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.124926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.125014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.125043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.125131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.125158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 
00:26:39.072 [2024-10-08 18:33:57.125272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.125319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.125416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.125445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.125535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.125563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.125652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.125678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.125768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.125797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.125938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.125966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.126065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.126093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.126211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.126239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.126329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.126356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.126448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.126476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 
00:26:39.072 [2024-10-08 18:33:57.126604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.126631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.126742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.126769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.126880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.126907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.127027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.127056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.127182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.127210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.127327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.127356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.127445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.127478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.127566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.127592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.127729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.127757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.127847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.127874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 
00:26:39.072 [2024-10-08 18:33:57.127952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.127979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.128066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.128094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.128180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.128208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.128293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.128319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.128434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.072 [2024-10-08 18:33:57.128461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.072 qpair failed and we were unable to recover it. 00:26:39.072 [2024-10-08 18:33:57.128553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.073 [2024-10-08 18:33:57.128580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.073 qpair failed and we were unable to recover it. 00:26:39.073 [2024-10-08 18:33:57.128662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.073 [2024-10-08 18:33:57.128690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.073 qpair failed and we were unable to recover it. 00:26:39.073 [2024-10-08 18:33:57.128774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.073 [2024-10-08 18:33:57.128806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.073 qpair failed and we were unable to recover it. 00:26:39.073 [2024-10-08 18:33:57.128898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.073 [2024-10-08 18:33:57.128927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.073 qpair failed and we were unable to recover it. 00:26:39.073 [2024-10-08 18:33:57.129020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.073 [2024-10-08 18:33:57.129048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.073 qpair failed and we were unable to recover it. 
00:26:39.073 [2024-10-08 18:33:57.129158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.073 [2024-10-08 18:33:57.129186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420
00:26:39.073 qpair failed and we were unable to recover it.
00:26:39.073 [2024-10-08 18:33:57.130790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.073 [2024-10-08 18:33:57.130819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420
00:26:39.073 qpair failed and we were unable to recover it.
(the same three-line failure pattern, posix_sock_create connect() failed with errno = 111, then nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f816c000b90 or tqpair=0x1ff7060 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.", repeats continuously in the log from 18:33:57.129306 through 18:33:57.157282)
00:26:39.078 [2024-10-08 18:33:57.157406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.078 [2024-10-08 18:33:57.157433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.078 qpair failed and we were unable to recover it. 00:26:39.078 [2024-10-08 18:33:57.157544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.078 [2024-10-08 18:33:57.157572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.078 qpair failed and we were unable to recover it. 00:26:39.078 [2024-10-08 18:33:57.157688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.078 [2024-10-08 18:33:57.157717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.078 qpair failed and we were unable to recover it. 00:26:39.078 [2024-10-08 18:33:57.157803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.078 [2024-10-08 18:33:57.157831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.078 qpair failed and we were unable to recover it. 00:26:39.078 [2024-10-08 18:33:57.157936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.078 [2024-10-08 18:33:57.157963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.078 qpair failed and we were unable to recover it. 00:26:39.078 [2024-10-08 18:33:57.158074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.078 [2024-10-08 18:33:57.158102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.078 qpair failed and we were unable to recover it. 00:26:39.078 [2024-10-08 18:33:57.158209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.078 [2024-10-08 18:33:57.158237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.078 qpair failed and we were unable to recover it. 00:26:39.078 [2024-10-08 18:33:57.158377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.078 [2024-10-08 18:33:57.158406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.078 qpair failed and we were unable to recover it. 00:26:39.078 [2024-10-08 18:33:57.158521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.078 [2024-10-08 18:33:57.158550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.078 qpair failed and we were unable to recover it. 00:26:39.078 [2024-10-08 18:33:57.158665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.078 [2024-10-08 18:33:57.158693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.078 qpair failed and we were unable to recover it. 
00:26:39.078 [2024-10-08 18:33:57.158771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.078 [2024-10-08 18:33:57.158797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.078 qpair failed and we were unable to recover it. 00:26:39.078 [2024-10-08 18:33:57.158892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.078 [2024-10-08 18:33:57.158920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.078 qpair failed and we were unable to recover it. 00:26:39.078 [2024-10-08 18:33:57.159034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.078 [2024-10-08 18:33:57.159062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.078 qpair failed and we were unable to recover it. 00:26:39.078 [2024-10-08 18:33:57.159191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.078 [2024-10-08 18:33:57.159232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.078 qpair failed and we were unable to recover it. 00:26:39.078 [2024-10-08 18:33:57.159376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.078 [2024-10-08 18:33:57.159417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:39.078 qpair failed and we were unable to recover it. 00:26:39.078 [2024-10-08 18:33:57.159517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.078 [2024-10-08 18:33:57.159547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:39.078 qpair failed and we were unable to recover it. 00:26:39.078 [2024-10-08 18:33:57.159670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.078 [2024-10-08 18:33:57.159699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:39.078 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.159812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.159841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.159942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.159970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.160088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.160116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 
00:26:39.079 [2024-10-08 18:33:57.160239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.160281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.160409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.160438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.160537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.160566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.160687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.160716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.160805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.160832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.160934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.160964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.161086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.161115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.161200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.161228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.161314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.161347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.161461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.161489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 
00:26:39.079 [2024-10-08 18:33:57.161572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.161601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.161716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.161745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.161868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.161896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.162006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.162034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.162162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.162190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.162316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.162345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.162485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.162513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.162649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.162677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.162770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.162798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.162890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.162918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 
00:26:39.079 [2024-10-08 18:33:57.163031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.163058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.163176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.163205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.163351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.163380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.163494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.163522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.163604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.163636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.163725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.163753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.163843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.163872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.163984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.164019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.164174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.164204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.164302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.164331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 
00:26:39.079 [2024-10-08 18:33:57.164456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.164485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.164628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.164657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.164805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.164835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.164926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.164954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.165073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.165101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.165190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.165219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.165361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.079 [2024-10-08 18:33:57.165390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.079 qpair failed and we were unable to recover it. 00:26:39.079 [2024-10-08 18:33:57.165478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.165505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.165650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.165679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.165792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.165823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 
00:26:39.080 [2024-10-08 18:33:57.165949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.165977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.166098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.166126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.166246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.166274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.166430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.166458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.166548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.166576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.166701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.166729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.166810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.166837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.166929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.166958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.167086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.167114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.167215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.167243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 
00:26:39.080 [2024-10-08 18:33:57.167371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.167401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.167495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.167524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.167666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.167694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.167814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.167841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.167928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.167956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.168052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.168079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.168193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.168222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.168335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.168364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.168458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.168486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.168579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.168607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 
00:26:39.080 [2024-10-08 18:33:57.168699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.168727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.168806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.168833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.168988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.169017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.169097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.169124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.169277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.169311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.169422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.169451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.169572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.169600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.169712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.169740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.169854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.169881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.169972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.170000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 
00:26:39.080 [2024-10-08 18:33:57.170088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.170116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.170218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.170247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.170342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.170371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.170478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.170506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.170645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.170672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.170817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.170846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.170990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.171018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.080 qpair failed and we were unable to recover it. 00:26:39.080 [2024-10-08 18:33:57.171136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.080 [2024-10-08 18:33:57.171164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.171283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.171319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.171444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.171472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 
00:26:39.081 [2024-10-08 18:33:57.171586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.171613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.171734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.171762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.171913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.171941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.172057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.172084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.172175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.172203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.172294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.172322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.172432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.172460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.172562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.172590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.172678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.172706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.172823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.172852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 
00:26:39.081 [2024-10-08 18:33:57.172962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.172998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.173120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.173150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.173247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.173277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.173371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.173399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.173517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.173546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.173632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.173660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.173775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.173803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.173927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.173955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.174069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.174098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.174210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.174238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 
00:26:39.081 [2024-10-08 18:33:57.174315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.174342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.174461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.174489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.174572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.174606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.174728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.174756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.174902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.174930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.175042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.175070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.175183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.175211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.175354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.175383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.175503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.175532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.175619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.175646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 
00:26:39.081 [2024-10-08 18:33:57.175776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.175804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.175890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.175917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.176055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.176083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.176225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.176253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.176376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.176410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.176501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.176527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.176665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.176694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.081 [2024-10-08 18:33:57.176807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.081 [2024-10-08 18:33:57.176835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.081 qpair failed and we were unable to recover it. 00:26:39.082 [2024-10-08 18:33:57.176918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.082 [2024-10-08 18:33:57.176945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.082 qpair failed and we were unable to recover it. 00:26:39.082 [2024-10-08 18:33:57.177062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.082 [2024-10-08 18:33:57.177099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.082 qpair failed and we were unable to recover it. 
00:26:39.087 [2024-10-08 18:33:57.205180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.087 [2024-10-08 18:33:57.205208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.087 qpair failed and we were unable to recover it. 00:26:39.087 [2024-10-08 18:33:57.205351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.087 [2024-10-08 18:33:57.205380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.087 qpair failed and we were unable to recover it. 00:26:39.087 [2024-10-08 18:33:57.205492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.087 [2024-10-08 18:33:57.205520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.087 qpair failed and we were unable to recover it. 00:26:39.087 [2024-10-08 18:33:57.205637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.087 [2024-10-08 18:33:57.205665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.087 qpair failed and we were unable to recover it. 00:26:39.087 [2024-10-08 18:33:57.205775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.087 [2024-10-08 18:33:57.205804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.087 qpair failed and we were unable to recover it. 00:26:39.087 [2024-10-08 18:33:57.205948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.087 [2024-10-08 18:33:57.205976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.087 qpair failed and we were unable to recover it. 00:26:39.087 [2024-10-08 18:33:57.206089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.087 [2024-10-08 18:33:57.206117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.087 qpair failed and we were unable to recover it. 00:26:39.087 [2024-10-08 18:33:57.206256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.087 [2024-10-08 18:33:57.206285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.087 qpair failed and we were unable to recover it. 00:26:39.087 [2024-10-08 18:33:57.206376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.087 [2024-10-08 18:33:57.206404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.087 qpair failed and we were unable to recover it. 00:26:39.087 [2024-10-08 18:33:57.206518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.087 [2024-10-08 18:33:57.206547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.087 qpair failed and we were unable to recover it. 
00:26:39.087 [2024-10-08 18:33:57.206652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.087 [2024-10-08 18:33:57.206680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.087 qpair failed and we were unable to recover it. 00:26:39.087 [2024-10-08 18:33:57.206823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.087 [2024-10-08 18:33:57.206856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.087 qpair failed and we were unable to recover it. 00:26:39.087 [2024-10-08 18:33:57.206978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.087 [2024-10-08 18:33:57.207007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.087 qpair failed and we were unable to recover it. 00:26:39.087 [2024-10-08 18:33:57.207129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.087 [2024-10-08 18:33:57.207157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.087 qpair failed and we were unable to recover it. 00:26:39.087 [2024-10-08 18:33:57.207301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.087 [2024-10-08 18:33:57.207330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.087 qpair failed and we were unable to recover it. 00:26:39.087 [2024-10-08 18:33:57.207446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.087 [2024-10-08 18:33:57.207473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.087 qpair failed and we were unable to recover it. 00:26:39.087 [2024-10-08 18:33:57.207636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.087 [2024-10-08 18:33:57.207677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.087 qpair failed and we were unable to recover it. 00:26:39.087 [2024-10-08 18:33:57.207821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.087 [2024-10-08 18:33:57.207850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.087 qpair failed and we were unable to recover it. 00:26:39.087 [2024-10-08 18:33:57.207990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.087 [2024-10-08 18:33:57.208018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.087 qpair failed and we were unable to recover it. 00:26:39.087 [2024-10-08 18:33:57.208155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.087 [2024-10-08 18:33:57.208183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.087 qpair failed and we were unable to recover it. 
00:26:39.087 [2024-10-08 18:33:57.208308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.087 [2024-10-08 18:33:57.208337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.087 qpair failed and we were unable to recover it. 00:26:39.087 [2024-10-08 18:33:57.208420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.087 [2024-10-08 18:33:57.208448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.087 qpair failed and we were unable to recover it. 00:26:39.087 [2024-10-08 18:33:57.208543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.087 [2024-10-08 18:33:57.208572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.087 qpair failed and we were unable to recover it. 00:26:39.087 [2024-10-08 18:33:57.208713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.087 [2024-10-08 18:33:57.208741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.087 qpair failed and we were unable to recover it. 00:26:39.087 [2024-10-08 18:33:57.208860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.087 [2024-10-08 18:33:57.208889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.087 qpair failed and we were unable to recover it. 00:26:39.087 [2024-10-08 18:33:57.208980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.087 [2024-10-08 18:33:57.209010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.087 qpair failed and we were unable to recover it. 00:26:39.087 [2024-10-08 18:33:57.209127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.087 [2024-10-08 18:33:57.209155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.087 qpair failed and we were unable to recover it. 00:26:39.087 [2024-10-08 18:33:57.209273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.087 [2024-10-08 18:33:57.209309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.087 qpair failed and we were unable to recover it. 00:26:39.087 [2024-10-08 18:33:57.209428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.087 [2024-10-08 18:33:57.209456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.087 qpair failed and we were unable to recover it. 00:26:39.087 [2024-10-08 18:33:57.209572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.087 [2024-10-08 18:33:57.209600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.087 qpair failed and we were unable to recover it. 
00:26:39.087 [2024-10-08 18:33:57.209741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.087 [2024-10-08 18:33:57.209769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.087 qpair failed and we were unable to recover it. 00:26:39.087 [2024-10-08 18:33:57.209890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.087 [2024-10-08 18:33:57.209919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.210045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.210073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.210191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.210219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.210312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.210340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.210433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.210461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.210576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.210604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.210718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.210747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.210865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.210898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.211019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.211047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 
00:26:39.088 [2024-10-08 18:33:57.211166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.211194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.211293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.211322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.211410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.211438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.211548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.211576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.211714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.211742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.211832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.211860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.211975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.212003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.212096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.212124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.212241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.212269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.212388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.212416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 
00:26:39.088 [2024-10-08 18:33:57.212530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.212558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.212697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.212725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.212874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.212902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.213024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.213052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.213189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.213217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.213332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.213361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.213456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.213484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.213578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.213606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.213719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.213748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.213866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.213894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 
00:26:39.088 [2024-10-08 18:33:57.213985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.214014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.214122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.214150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.214256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.214282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.214440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.214467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.214552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.214580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.214718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.214750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.214844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.214872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.215018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.215047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.215164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.215192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.215282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.215317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 
00:26:39.088 [2024-10-08 18:33:57.215463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.215491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.088 [2024-10-08 18:33:57.215569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.088 [2024-10-08 18:33:57.215598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.088 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.215690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.215718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.215807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.215835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.215925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.215953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.216059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.216086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.216165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.216191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.216306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.216334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.216416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.216443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.216591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.216620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 
00:26:39.089 [2024-10-08 18:33:57.216727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.216755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.216849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.216877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.216961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.216990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.217133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.217162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.217298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.217326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.217444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.217472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.217565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.217594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.217711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.217738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.217829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.217858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.217949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.217976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 
00:26:39.089 [2024-10-08 18:33:57.218056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.218084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.218199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.218226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.218334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.218367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.218508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.218536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.218729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.218756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.218899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.218926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.219039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.219067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.219185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.219226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.219365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.219396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.219513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.219542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 
00:26:39.089 [2024-10-08 18:33:57.219657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.219686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.219765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.219792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.219915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.219944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.220057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.220086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.220179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.220207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.220322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.220351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.220474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.220502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.220593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.220620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.220737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.220766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.220894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.220923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 
00:26:39.089 [2024-10-08 18:33:57.221041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.221069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.089 [2024-10-08 18:33:57.221164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.089 [2024-10-08 18:33:57.221192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.089 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.221282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.221316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.221428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.221455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.221540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.221568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.221685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.221714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.221853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.221881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.221995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.222023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.222138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.222166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.222258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.222293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 
00:26:39.090 [2024-10-08 18:33:57.222417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.222445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.222558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.222585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.222703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.222730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.222846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.222873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.223071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.223099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.223187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.223214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.223327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.223356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.223464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.223491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.223573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.223601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.223679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.223707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 
00:26:39.090 [2024-10-08 18:33:57.223847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.223874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.223986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.224014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.224121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.224149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.224245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.224275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.224386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.224414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.224504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.224532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.224674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.224702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.224818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.224849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.224969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.224996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.225091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.225120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 
00:26:39.090 [2024-10-08 18:33:57.225261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.225295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.225380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.225408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.225497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.225525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.225607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.225634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.225748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.225776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.225930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.225960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.226057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.226084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.226205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.226233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.226321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.226350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.226462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.226490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 
00:26:39.090 [2024-10-08 18:33:57.226575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.226604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.226721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.090 [2024-10-08 18:33:57.226749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.090 qpair failed and we were unable to recover it. 00:26:39.090 [2024-10-08 18:33:57.226860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.226888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.091 [2024-10-08 18:33:57.226980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.227008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.091 [2024-10-08 18:33:57.227150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.227179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.091 [2024-10-08 18:33:57.227269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.227307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.091 [2024-10-08 18:33:57.227423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.227451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.091 [2024-10-08 18:33:57.227540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.227568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.091 [2024-10-08 18:33:57.227653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.227681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.091 [2024-10-08 18:33:57.227818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.227846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 
00:26:39.091 [2024-10-08 18:33:57.227960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.227989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.091 [2024-10-08 18:33:57.228071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.228099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.091 [2024-10-08 18:33:57.228187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.228215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.091 [2024-10-08 18:33:57.228331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.228360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.091 [2024-10-08 18:33:57.228504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.228532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.091 [2024-10-08 18:33:57.228647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.228675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.091 [2024-10-08 18:33:57.228792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.228820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.091 [2024-10-08 18:33:57.228939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.228966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.091 [2024-10-08 18:33:57.229046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.229074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.091 [2024-10-08 18:33:57.229188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.229216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 
00:26:39.091 [2024-10-08 18:33:57.229327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.229355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.091 [2024-10-08 18:33:57.229444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.229472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.091 [2024-10-08 18:33:57.229556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.229583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.091 [2024-10-08 18:33:57.229672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.229700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.091 [2024-10-08 18:33:57.229809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.229837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.091 [2024-10-08 18:33:57.229949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.229978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.091 [2024-10-08 18:33:57.230063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.230090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.091 [2024-10-08 18:33:57.230202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.230230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.091 [2024-10-08 18:33:57.230309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.230336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.091 [2024-10-08 18:33:57.230427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.230456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 
00:26:39.091 [2024-10-08 18:33:57.230572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.230600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.091 [2024-10-08 18:33:57.230709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.230737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.091 [2024-10-08 18:33:57.230823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.230850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.091 [2024-10-08 18:33:57.230964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.230991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.091 [2024-10-08 18:33:57.231107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.231135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.091 [2024-10-08 18:33:57.231245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.231273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.091 [2024-10-08 18:33:57.231404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.231437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.091 [2024-10-08 18:33:57.231558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.231586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.091 [2024-10-08 18:33:57.231727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.091 [2024-10-08 18:33:57.231754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.091 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.231894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.231921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 
00:26:39.092 [2024-10-08 18:33:57.232008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.232037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.232153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.232182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.232266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.232300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.232387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.232415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.232555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.232583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.232698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.232726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.232864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.232893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.232989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.233017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.233162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.233190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.233277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.233311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 
00:26:39.092 [2024-10-08 18:33:57.233453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.233480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.233617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.233645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.233759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.233786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.233902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.233930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.234014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.234041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.234155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.234184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.234306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.234335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.234414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.234442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.234526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.234553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.234646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.234674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 
00:26:39.092 [2024-10-08 18:33:57.234789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.234817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.234926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.234955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.235062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.235089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.235201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.235234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.235316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.235343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.235479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.235506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.235621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.235649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.235790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.235818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.235934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.235962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.236108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.236136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 
00:26:39.092 [2024-10-08 18:33:57.236248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.236277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.236381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.236409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.236522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.236550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.236690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.236719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.236839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.236867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.236956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.236984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.237101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.237130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.237274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.237312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.092 [2024-10-08 18:33:57.237451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.092 [2024-10-08 18:33:57.237479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.092 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.237591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.237618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 
00:26:39.093 [2024-10-08 18:33:57.237736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.237763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.237849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.237876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.237969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.237997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.238136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.238164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.238280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.238316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.238453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.238482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.238560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.238588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.238702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.238730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.238821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.238850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.238946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.238974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 
00:26:39.093 [2024-10-08 18:33:57.239086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.239118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.239229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.239256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.239403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.239431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.239541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.239569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.239654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.239682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.239801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.239829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.239943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.239970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.240064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.240092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.240233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.240261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.240359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.240387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 
00:26:39.093 [2024-10-08 18:33:57.240528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.240570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.240695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.240725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.240866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.240895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.241007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.241035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.241151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.241179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.241266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.241300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.241417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.241444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.241561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.241590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.241666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.241693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.241786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.241815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 
00:26:39.093 [2024-10-08 18:33:57.241933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.241960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.242043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.242071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.242187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.242215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.242305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.242333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.242448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.242476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.242602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.242631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.242748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.242777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.242890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.242923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.243015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.093 [2024-10-08 18:33:57.243042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.093 qpair failed and we were unable to recover it. 00:26:39.093 [2024-10-08 18:33:57.243157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.243185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 
00:26:39.094 [2024-10-08 18:33:57.243272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.243308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.243418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.243446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.243549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.243577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.243698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.243726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.243839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.243869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.243985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.244013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.244146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.244174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.244305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.244333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.244475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.244503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.244618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.244646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 
00:26:39.094 [2024-10-08 18:33:57.244766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.244793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.244886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.244913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.245000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.245028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.245138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.245166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.245274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.245309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.245392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.245420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.245517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.245544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.245662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.245691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.245784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.245812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.245926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.245954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 
00:26:39.094 [2024-10-08 18:33:57.246036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.246064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.246173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.246201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.246294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.246323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.246446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.246474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.246604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.246632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.246766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.246801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.246913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.246940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.247026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.247054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.247147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.247176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.247258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.247284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 
00:26:39.094 [2024-10-08 18:33:57.247382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.247410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.247524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.247553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.247693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.247720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.247803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.247831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.247974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.248002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.248145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.248173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.248265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.248301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.248389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.248417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.248505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.248532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.094 qpair failed and we were unable to recover it. 00:26:39.094 [2024-10-08 18:33:57.248647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.094 [2024-10-08 18:33:57.248674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 
00:26:39.095 [2024-10-08 18:33:57.248764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.248792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 00:26:39.095 [2024-10-08 18:33:57.248889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.248917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 00:26:39.095 [2024-10-08 18:33:57.249058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.249086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 00:26:39.095 [2024-10-08 18:33:57.249209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.249238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 00:26:39.095 [2024-10-08 18:33:57.249331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.249358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 00:26:39.095 [2024-10-08 18:33:57.249450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.249477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 00:26:39.095 [2024-10-08 18:33:57.249589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.249617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 00:26:39.095 [2024-10-08 18:33:57.249706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.249734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 00:26:39.095 [2024-10-08 18:33:57.249818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.249845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 00:26:39.095 [2024-10-08 18:33:57.249939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.249968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 
00:26:39.095 [2024-10-08 18:33:57.250060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.250087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 00:26:39.095 [2024-10-08 18:33:57.250205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.250249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 00:26:39.095 [2024-10-08 18:33:57.250386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.250415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 00:26:39.095 [2024-10-08 18:33:57.250516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.250545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 00:26:39.095 [2024-10-08 18:33:57.250640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.250668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 00:26:39.095 [2024-10-08 18:33:57.250812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.250841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 00:26:39.095 [2024-10-08 18:33:57.250932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.250961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 00:26:39.095 [2024-10-08 18:33:57.251048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.251076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 00:26:39.095 [2024-10-08 18:33:57.251198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.251235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 00:26:39.095 [2024-10-08 18:33:57.251325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.251352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 
00:26:39.095 [2024-10-08 18:33:57.251445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.251472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 00:26:39.095 [2024-10-08 18:33:57.251565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.251593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 00:26:39.095 [2024-10-08 18:33:57.251732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.251760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 00:26:39.095 [2024-10-08 18:33:57.251839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.251871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 00:26:39.095 [2024-10-08 18:33:57.251955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.251988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 00:26:39.095 [2024-10-08 18:33:57.252203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.252244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 00:26:39.095 [2024-10-08 18:33:57.252380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.252409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 00:26:39.095 [2024-10-08 18:33:57.252498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.252526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 00:26:39.095 [2024-10-08 18:33:57.252635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.252663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 00:26:39.095 [2024-10-08 18:33:57.252755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.252783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 
00:26:39.095 [2024-10-08 18:33:57.252927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.252954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 00:26:39.095 [2024-10-08 18:33:57.253059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.253088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 00:26:39.095 [2024-10-08 18:33:57.253170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.253198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 00:26:39.095 [2024-10-08 18:33:57.253297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.253325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 00:26:39.095 [2024-10-08 18:33:57.253402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.253428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 00:26:39.095 [2024-10-08 18:33:57.253519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.095 [2024-10-08 18:33:57.253547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.095 qpair failed and we were unable to recover it. 00:26:39.095 [2024-10-08 18:33:57.253626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.253652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.253745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.253775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.253861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.253889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.253974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.254000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 
00:26:39.096 [2024-10-08 18:33:57.254091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.254118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.254202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.254229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.254323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.254350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.254443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.254471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.254582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.254610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.254705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.254732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.254828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.254857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.254952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.254980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.255061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.255092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.255179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.255207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 
00:26:39.096 [2024-10-08 18:33:57.255305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.255333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.255448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.255481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.255565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.255593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.255708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.255736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.255829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.255857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.255974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.256002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.256117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.256145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.256260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.256295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.256392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.256420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.256557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.256584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 
00:26:39.096 [2024-10-08 18:33:57.256678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.256705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.256797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.256826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.256942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.256969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.257052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.257079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.257194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.257223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.257352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.257381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.257474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.257501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.257584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.257610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.257727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.257754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.257867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.257894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 
00:26:39.096 [2024-10-08 18:33:57.258006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.258036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.258156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.258185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.258278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.258313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.258404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.258432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.258516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.258543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.258626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.258654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.096 qpair failed and we were unable to recover it. 00:26:39.096 [2024-10-08 18:33:57.258740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.096 [2024-10-08 18:33:57.258768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.258882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.258910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.259019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.259051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.259148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.259176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 
00:26:39.097 [2024-10-08 18:33:57.259267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.259301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.259413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.259441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.259638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.259667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.259784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.259812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.259899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.259926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.260042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.260070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.260185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.260212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.260296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.260323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.260462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.260490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.260616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.260644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 
00:26:39.097 [2024-10-08 18:33:57.260756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.260783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.260876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.260905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.260996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.261024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.261137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.261165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.261248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.261274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.261368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.261396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.261510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.261538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.261657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.261686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.261770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.261797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.261892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.261919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 
00:26:39.097 [2024-10-08 18:33:57.261996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.262024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.262101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.262127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.262237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.262264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.262354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.262382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.262479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.262507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.262614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.262655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.262749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.262777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.262887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.262914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.262997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.263023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.263136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.263163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 
00:26:39.097 [2024-10-08 18:33:57.263278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.263312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.263423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.263450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.263529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.263555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.263667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.263694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.263792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.263821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.263928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.263955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.097 [2024-10-08 18:33:57.264043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.097 [2024-10-08 18:33:57.264071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.097 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.264149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.264177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.264267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.264309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.264410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.264438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 
00:26:39.098 [2024-10-08 18:33:57.264520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.264549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.264640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.264668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.264752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.264779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.264869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.264896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.264974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.265000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.265085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.265112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.265252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.265282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.265387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.265415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.265509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.265537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.265618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.265646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 
00:26:39.098 [2024-10-08 18:33:57.265735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.265763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.265871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.265898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.266014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.266043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.266143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.266183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.266278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.266315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.266407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.266435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.266546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.266574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.266690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.266718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.266799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.266826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.266940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.266968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 
00:26:39.098 [2024-10-08 18:33:57.267058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.267085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.267198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.267225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.267308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.267335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.267431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.267458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.267572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.267599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.267685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.267713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.267793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.267820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.267909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.267936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.268049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.268090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.268214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.268244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 
00:26:39.098 [2024-10-08 18:33:57.268347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.268388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.268484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.268514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.268635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.268663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.268770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.268798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.268891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.268920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.269003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.269028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.098 [2024-10-08 18:33:57.269122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.098 [2024-10-08 18:33:57.269151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.098 qpair failed and we were unable to recover it. 00:26:39.099 [2024-10-08 18:33:57.269229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.099 [2024-10-08 18:33:57.269255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.099 qpair failed and we were unable to recover it. 00:26:39.099 [2024-10-08 18:33:57.269357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.099 [2024-10-08 18:33:57.269384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.099 qpair failed and we were unable to recover it. 00:26:39.099 [2024-10-08 18:33:57.269485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.099 [2024-10-08 18:33:57.269512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.099 qpair failed and we were unable to recover it. 
00:26:39.099 [2024-10-08 18:33:57.269625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.099 [2024-10-08 18:33:57.269650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420
00:26:39.099 qpair failed and we were unable to recover it.
00:26:39.099 [2024-10-08 18:33:57.269986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.099 [2024-10-08 18:33:57.270013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420
00:26:39.099 qpair failed and we were unable to recover it.
00:26:39.099 [2024-10-08 18:33:57.270131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.099 [2024-10-08 18:33:57.270158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420
00:26:39.099 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=... with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats continuously for tqpairs 0x7f816c000b90, 0x7f8174000b90 and 0x1ff7060 through 2024-10-08 18:33:57.297053 ...]
00:26:39.104 [2024-10-08 18:33:57.297142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.104 [2024-10-08 18:33:57.297170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.104 qpair failed and we were unable to recover it. 00:26:39.104 [2024-10-08 18:33:57.297362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.104 [2024-10-08 18:33:57.297391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.104 qpair failed and we were unable to recover it. 00:26:39.104 [2024-10-08 18:33:57.297504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.104 [2024-10-08 18:33:57.297532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.104 qpair failed and we were unable to recover it. 00:26:39.104 [2024-10-08 18:33:57.297620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.104 [2024-10-08 18:33:57.297649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.104 qpair failed and we were unable to recover it. 00:26:39.104 [2024-10-08 18:33:57.297767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.104 [2024-10-08 18:33:57.297794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.104 qpair failed and we were unable to recover it. 00:26:39.104 [2024-10-08 18:33:57.297884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.104 [2024-10-08 18:33:57.297911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.104 qpair failed and we were unable to recover it. 00:26:39.104 [2024-10-08 18:33:57.297991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.104 [2024-10-08 18:33:57.298018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.104 qpair failed and we were unable to recover it. 00:26:39.104 [2024-10-08 18:33:57.298112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.104 [2024-10-08 18:33:57.298140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.104 qpair failed and we were unable to recover it. 00:26:39.104 [2024-10-08 18:33:57.298245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.104 [2024-10-08 18:33:57.298285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.104 qpair failed and we were unable to recover it. 00:26:39.104 [2024-10-08 18:33:57.298384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.104 [2024-10-08 18:33:57.298413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.104 qpair failed and we were unable to recover it. 
00:26:39.104 [2024-10-08 18:33:57.298554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.104 [2024-10-08 18:33:57.298582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.104 qpair failed and we were unable to recover it. 00:26:39.104 [2024-10-08 18:33:57.298700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.104 [2024-10-08 18:33:57.298728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.104 qpair failed and we were unable to recover it. 00:26:39.104 [2024-10-08 18:33:57.298817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.298845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.298932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.298960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.299055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.299083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.299221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.299262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.299378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.299413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.299555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.299583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.299704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.299732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.299832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.299860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 
00:26:39.105 [2024-10-08 18:33:57.299948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.299976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.300112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.300153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.300251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.300280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.300406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.300435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.300539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.300567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.300683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.300710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.300819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.300846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.300966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.300994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.301106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.301134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.301282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.301334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 
00:26:39.105 [2024-10-08 18:33:57.301464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.301493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.301584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.301612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.301704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.301731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.301847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.301875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.301960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.301988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.302103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.302132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.302238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.302279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.302388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.302419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.302518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.302547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.302682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.302710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 
00:26:39.105 [2024-10-08 18:33:57.302828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.302856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.302977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.303006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.303103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.303131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.303213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.303247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.303342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.303371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.303462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.303490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.303584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.303611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.303697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.303725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.303806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.303834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.303912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.303939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 
00:26:39.105 [2024-10-08 18:33:57.304056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.304084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.304183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.105 [2024-10-08 18:33:57.304224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.105 qpair failed and we were unable to recover it. 00:26:39.105 [2024-10-08 18:33:57.304321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.304351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.304443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.304472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.304559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.304587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.304701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.304728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.304836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.304864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.304958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.304987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.305102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.305130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.305243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.305272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 
00:26:39.106 [2024-10-08 18:33:57.305366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.305394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.305474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.305502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.305620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.305648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.305738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.305767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.305853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.305881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.305997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.306025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.306109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.306136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.306220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.306248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.306365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.306393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.306478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.306505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 
00:26:39.106 [2024-10-08 18:33:57.306600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.306628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.306713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.306742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.306825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.306853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.306970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.306998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.307095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.307136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.307234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.307263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.307407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.307452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.307587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.307615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.307699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.307727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.307863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.307891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 
00:26:39.106 [2024-10-08 18:33:57.308007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.308035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.308117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.308145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.308230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.308259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.308353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.308386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.308470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.308498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.308615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.308644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.308730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.308758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.308903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.308931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.309024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.309052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.309160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.309187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 
00:26:39.106 [2024-10-08 18:33:57.309269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.309302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.309390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.309419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.106 qpair failed and we were unable to recover it. 00:26:39.106 [2024-10-08 18:33:57.309509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.106 [2024-10-08 18:33:57.309537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.309654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.309682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.309782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.309810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.309921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.309948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.310029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.310057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.310174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.310202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.310298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.310326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.310411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.310439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 
00:26:39.107 [2024-10-08 18:33:57.310533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.310560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.310642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.310669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.310748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.310775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.310893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.310921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.311006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.311034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.311157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.311197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.311322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.311351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.311439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.311467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.311552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.311581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.311661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.311688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 
00:26:39.107 [2024-10-08 18:33:57.311778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.311806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.311919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.311948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.312047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.312078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.312166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.312195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.312280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.312315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.312431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.312459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.312551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.312578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.312658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.312686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.312782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.312809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.312905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.312932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 
00:26:39.107 [2024-10-08 18:33:57.313014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.313041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.313136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.313164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.313277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.313313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.313431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.313467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.313556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.313584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.313670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.313698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.313786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.313814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.313903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.313931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.314024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.314051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.314141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.314170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 
00:26:39.107 [2024-10-08 18:33:57.314282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.314318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.314417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.314445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.107 qpair failed and we were unable to recover it. 00:26:39.107 [2024-10-08 18:33:57.314553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.107 [2024-10-08 18:33:57.314581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.108 qpair failed and we were unable to recover it. 00:26:39.108 [2024-10-08 18:33:57.314669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.108 [2024-10-08 18:33:57.314697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.108 qpair failed and we were unable to recover it. 00:26:39.108 [2024-10-08 18:33:57.314783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.108 [2024-10-08 18:33:57.314811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.108 qpair failed and we were unable to recover it. 00:26:39.108 [2024-10-08 18:33:57.314896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.108 [2024-10-08 18:33:57.314923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.108 qpair failed and we were unable to recover it. 00:26:39.108 [2024-10-08 18:33:57.315010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.108 [2024-10-08 18:33:57.315039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.108 qpair failed and we were unable to recover it. 00:26:39.108 [2024-10-08 18:33:57.315135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.108 [2024-10-08 18:33:57.315163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.108 qpair failed and we were unable to recover it. 00:26:39.108 [2024-10-08 18:33:57.315253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.108 [2024-10-08 18:33:57.315281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.108 qpair failed and we were unable to recover it. 00:26:39.108 [2024-10-08 18:33:57.315383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.108 [2024-10-08 18:33:57.315409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.108 qpair failed and we were unable to recover it. 
00:26:39.108 [2024-10-08 18:33:57.315496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.108 [2024-10-08 18:33:57.315524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.108 qpair failed and we were unable to recover it. 00:26:39.108 [2024-10-08 18:33:57.315605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.108 [2024-10-08 18:33:57.315633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.108 qpair failed and we were unable to recover it. 00:26:39.108 [2024-10-08 18:33:57.315751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.108 [2024-10-08 18:33:57.315779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.108 qpair failed and we were unable to recover it. 00:26:39.108 [2024-10-08 18:33:57.315866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.108 [2024-10-08 18:33:57.315895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.108 qpair failed and we were unable to recover it. 00:26:39.108 [2024-10-08 18:33:57.315989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.108 [2024-10-08 18:33:57.316018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.108 qpair failed and we were unable to recover it. 00:26:39.108 [2024-10-08 18:33:57.316138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.108 [2024-10-08 18:33:57.316165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.108 qpair failed and we were unable to recover it. 00:26:39.108 [2024-10-08 18:33:57.316252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.108 [2024-10-08 18:33:57.316279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.108 qpair failed and we were unable to recover it. 00:26:39.108 [2024-10-08 18:33:57.316399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.108 [2024-10-08 18:33:57.316428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.108 qpair failed and we were unable to recover it. 00:26:39.108 [2024-10-08 18:33:57.316525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.108 [2024-10-08 18:33:57.316554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.108 qpair failed and we were unable to recover it. 00:26:39.108 [2024-10-08 18:33:57.316672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.108 [2024-10-08 18:33:57.316700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.108 qpair failed and we were unable to recover it. 
00:26:39.108 [2024-10-08 18:33:57.316828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.108 [2024-10-08 18:33:57.316862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.108 qpair failed and we were unable to recover it. 00:26:39.108 [2024-10-08 18:33:57.316981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.108 [2024-10-08 18:33:57.317008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.108 qpair failed and we were unable to recover it. 00:26:39.108 [2024-10-08 18:33:57.317100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.108 [2024-10-08 18:33:57.317127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.108 qpair failed and we were unable to recover it. 00:26:39.108 [2024-10-08 18:33:57.317215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.108 [2024-10-08 18:33:57.317247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.108 qpair failed and we were unable to recover it. 00:26:39.108 [2024-10-08 18:33:57.317362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.108 [2024-10-08 18:33:57.317391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.108 qpair failed and we were unable to recover it. 00:26:39.108 [2024-10-08 18:33:57.317486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.108 [2024-10-08 18:33:57.317522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.108 qpair failed and we were unable to recover it. 00:26:39.108 [2024-10-08 18:33:57.317687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.108 [2024-10-08 18:33:57.317727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.108 qpair failed and we were unable to recover it. 00:26:39.108 [2024-10-08 18:33:57.317836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.108 [2024-10-08 18:33:57.317864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.108 qpair failed and we were unable to recover it. 00:26:39.108 [2024-10-08 18:33:57.317980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.108 [2024-10-08 18:33:57.318008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.108 qpair failed and we were unable to recover it. 00:26:39.108 [2024-10-08 18:33:57.318103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.108 [2024-10-08 18:33:57.318131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.108 qpair failed and we were unable to recover it. 
00:26:39.108 [2024-10-08 18:33:57.318245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.108 [2024-10-08 18:33:57.318272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.108 qpair failed and we were unable to recover it. 00:26:39.108 [2024-10-08 18:33:57.318372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.108 [2024-10-08 18:33:57.318400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.108 qpair failed and we were unable to recover it. 00:26:39.108 [2024-10-08 18:33:57.318486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.108 [2024-10-08 18:33:57.318513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.108 qpair failed and we were unable to recover it. 00:26:39.108 [2024-10-08 18:33:57.318632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.108 [2024-10-08 18:33:57.318660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.108 qpair failed and we were unable to recover it. 00:26:39.108 [2024-10-08 18:33:57.318812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.108 [2024-10-08 18:33:57.318840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.108 qpair failed and we were unable to recover it. 00:26:39.108 [2024-10-08 18:33:57.318935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.109 [2024-10-08 18:33:57.318964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.109 qpair failed and we were unable to recover it. 00:26:39.109 [2024-10-08 18:33:57.319044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.109 [2024-10-08 18:33:57.319071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.109 qpair failed and we were unable to recover it. 00:26:39.109 [2024-10-08 18:33:57.319165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.109 [2024-10-08 18:33:57.319192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.109 qpair failed and we were unable to recover it. 00:26:39.109 [2024-10-08 18:33:57.319312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.109 [2024-10-08 18:33:57.319352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.109 qpair failed and we were unable to recover it. 00:26:39.109 [2024-10-08 18:33:57.319467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.109 [2024-10-08 18:33:57.319509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.109 qpair failed and we were unable to recover it. 
00:26:39.109 [2024-10-08 18:33:57.319631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.109 [2024-10-08 18:33:57.319660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.109 qpair failed and we were unable to recover it. 00:26:39.109 [2024-10-08 18:33:57.319742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.109 [2024-10-08 18:33:57.319769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.109 qpair failed and we were unable to recover it. 00:26:39.109 [2024-10-08 18:33:57.319886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.109 [2024-10-08 18:33:57.319914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.109 qpair failed and we were unable to recover it. 00:26:39.109 [2024-10-08 18:33:57.320008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.109 [2024-10-08 18:33:57.320036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.109 qpair failed and we were unable to recover it. 00:26:39.109 [2024-10-08 18:33:57.320128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.109 [2024-10-08 18:33:57.320155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.109 qpair failed and we were unable to recover it. 00:26:39.109 [2024-10-08 18:33:57.320275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.109 [2024-10-08 18:33:57.320315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.109 qpair failed and we were unable to recover it. 00:26:39.109 [2024-10-08 18:33:57.320425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.109 [2024-10-08 18:33:57.320452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.109 qpair failed and we were unable to recover it. 00:26:39.109 [2024-10-08 18:33:57.320582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.109 [2024-10-08 18:33:57.320623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.109 qpair failed and we were unable to recover it. 00:26:39.109 [2024-10-08 18:33:57.320735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.109 [2024-10-08 18:33:57.320777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.109 qpair failed and we were unable to recover it. 00:26:39.109 [2024-10-08 18:33:57.320897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.109 [2024-10-08 18:33:57.320928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.109 qpair failed and we were unable to recover it. 
00:26:39.390 [2024-10-08 18:33:57.321025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-10-08 18:33:57.321053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-10-08 18:33:57.321145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-10-08 18:33:57.321173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-10-08 18:33:57.321267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-10-08 18:33:57.321302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-10-08 18:33:57.321416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-10-08 18:33:57.321444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-10-08 18:33:57.321535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-10-08 18:33:57.321564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-10-08 18:33:57.321663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-10-08 18:33:57.321690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-10-08 18:33:57.321784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-10-08 18:33:57.321814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-10-08 18:33:57.321952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-10-08 18:33:57.321980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-10-08 18:33:57.322072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-10-08 18:33:57.322100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-10-08 18:33:57.322190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-10-08 18:33:57.322218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 
00:26:39.390 [2024-10-08 18:33:57.322306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-10-08 18:33:57.322335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-10-08 18:33:57.322429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-10-08 18:33:57.322463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-10-08 18:33:57.322550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-10-08 18:33:57.322578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-10-08 18:33:57.322697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-10-08 18:33:57.322725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-10-08 18:33:57.322839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-10-08 18:33:57.322867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-10-08 18:33:57.322949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-10-08 18:33:57.322977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-10-08 18:33:57.323097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-10-08 18:33:57.323124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-10-08 18:33:57.323218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.390 [2024-10-08 18:33:57.323245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.390 qpair failed and we were unable to recover it. 00:26:39.390 [2024-10-08 18:33:57.323343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.323371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-10-08 18:33:57.323462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.323489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 
00:26:39.391 [2024-10-08 18:33:57.323597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.323625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-10-08 18:33:57.323708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.323736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-10-08 18:33:57.323828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.323856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-10-08 18:33:57.323947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.323975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-10-08 18:33:57.324075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.324103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-10-08 18:33:57.324248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.324275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-10-08 18:33:57.324371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.324398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-10-08 18:33:57.324473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.324499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-10-08 18:33:57.324584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.324611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-10-08 18:33:57.324697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.324725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 
00:26:39.391 [2024-10-08 18:33:57.324817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.324845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-10-08 18:33:57.324931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.324961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-10-08 18:33:57.325076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.325103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-10-08 18:33:57.325185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.325213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-10-08 18:33:57.325330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.325359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-10-08 18:33:57.325499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.325529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-10-08 18:33:57.325640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.325667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-10-08 18:33:57.325790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.325823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-10-08 18:33:57.325937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.325966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-10-08 18:33:57.326052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.326080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 
00:26:39.391 [2024-10-08 18:33:57.326189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.326216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-10-08 18:33:57.326338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.326366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-10-08 18:33:57.326464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.326494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-10-08 18:33:57.326574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.326602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-10-08 18:33:57.326724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.326751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-10-08 18:33:57.326842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.326875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-10-08 18:33:57.326974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.327002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-10-08 18:33:57.327116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.327144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-10-08 18:33:57.327228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.327257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-10-08 18:33:57.327352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.327380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 
00:26:39.391 [2024-10-08 18:33:57.327473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.327501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-10-08 18:33:57.327600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.327631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-10-08 18:33:57.327746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.327774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.391 [2024-10-08 18:33:57.327893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.391 [2024-10-08 18:33:57.327920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.391 qpair failed and we were unable to recover it. 00:26:39.392 [2024-10-08 18:33:57.328034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.392 [2024-10-08 18:33:57.328061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.392 qpair failed and we were unable to recover it. 00:26:39.392 [2024-10-08 18:33:57.328171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.392 [2024-10-08 18:33:57.328199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.392 qpair failed and we were unable to recover it. 00:26:39.392 [2024-10-08 18:33:57.328308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.392 [2024-10-08 18:33:57.328336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.392 qpair failed and we were unable to recover it. 00:26:39.392 [2024-10-08 18:33:57.328428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.392 [2024-10-08 18:33:57.328456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.392 qpair failed and we were unable to recover it. 00:26:39.392 [2024-10-08 18:33:57.328541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.392 [2024-10-08 18:33:57.328569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.392 qpair failed and we were unable to recover it. 00:26:39.392 [2024-10-08 18:33:57.328707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.392 [2024-10-08 18:33:57.328734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.392 qpair failed and we were unable to recover it. 
00:26:39.392 [2024-10-08 18:33:57.328858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.392 [2024-10-08 18:33:57.328886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.392 qpair failed and we were unable to recover it. 00:26:39.392 [2024-10-08 18:33:57.328998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.392 [2024-10-08 18:33:57.329026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.392 qpair failed and we were unable to recover it. 00:26:39.392 [2024-10-08 18:33:57.329164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.392 [2024-10-08 18:33:57.329191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.392 qpair failed and we were unable to recover it. 00:26:39.392 [2024-10-08 18:33:57.329301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.392 [2024-10-08 18:33:57.329329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.392 qpair failed and we were unable to recover it. 00:26:39.392 [2024-10-08 18:33:57.329427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.392 [2024-10-08 18:33:57.329455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.392 qpair failed and we were unable to recover it. 00:26:39.392 [2024-10-08 18:33:57.329571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.392 [2024-10-08 18:33:57.329599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.392 qpair failed and we were unable to recover it. 00:26:39.392 [2024-10-08 18:33:57.329687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.392 [2024-10-08 18:33:57.329715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.392 qpair failed and we were unable to recover it. 00:26:39.392 [2024-10-08 18:33:57.329856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.392 [2024-10-08 18:33:57.329883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.392 qpair failed and we were unable to recover it. 00:26:39.392 [2024-10-08 18:33:57.329978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.392 [2024-10-08 18:33:57.330005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.392 qpair failed and we were unable to recover it. 00:26:39.392 [2024-10-08 18:33:57.330099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.392 [2024-10-08 18:33:57.330126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.392 qpair failed and we were unable to recover it. 
00:26:39.392 [2024-10-08 18:33:57.330242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.392 [2024-10-08 18:33:57.330270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.392 qpair failed and we were unable to recover it. 00:26:39.392 [2024-10-08 18:33:57.330367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.392 [2024-10-08 18:33:57.330394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.392 qpair failed and we were unable to recover it. 00:26:39.392 [2024-10-08 18:33:57.330503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.392 [2024-10-08 18:33:57.330531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.392 qpair failed and we were unable to recover it. 00:26:39.392 [2024-10-08 18:33:57.330617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.392 [2024-10-08 18:33:57.330644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.392 qpair failed and we were unable to recover it. 00:26:39.392 [2024-10-08 18:33:57.330729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.392 [2024-10-08 18:33:57.330756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.392 qpair failed and we were unable to recover it. 00:26:39.392 [2024-10-08 18:33:57.330850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.392 [2024-10-08 18:33:57.330878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.392 qpair failed and we were unable to recover it. 00:26:39.392 [2024-10-08 18:33:57.330981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.392 [2024-10-08 18:33:57.331008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.392 qpair failed and we were unable to recover it. 00:26:39.392 [2024-10-08 18:33:57.331125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.392 [2024-10-08 18:33:57.331157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.392 qpair failed and we were unable to recover it. 00:26:39.392 [2024-10-08 18:33:57.331247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.392 [2024-10-08 18:33:57.331274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.392 qpair failed and we were unable to recover it. 00:26:39.392 [2024-10-08 18:33:57.331370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.392 [2024-10-08 18:33:57.331397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.392 qpair failed and we were unable to recover it. 
00:26:39.392 [2024-10-08 18:33:57.331551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.392 [2024-10-08 18:33:57.331579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.392 qpair failed and we were unable to recover it. 00:26:39.392 [2024-10-08 18:33:57.331671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.392 [2024-10-08 18:33:57.331699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.392 qpair failed and we were unable to recover it. 00:26:39.392 [2024-10-08 18:33:57.331836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.392 [2024-10-08 18:33:57.331863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.392 qpair failed and we were unable to recover it. 00:26:39.392 [2024-10-08 18:33:57.331981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.392 [2024-10-08 18:33:57.332009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.392 qpair failed and we were unable to recover it. 00:26:39.392 [2024-10-08 18:33:57.332094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.392 [2024-10-08 18:33:57.332122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.392 qpair failed and we were unable to recover it. 00:26:39.392 [2024-10-08 18:33:57.332262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.392 [2024-10-08 18:33:57.332297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.392 qpair failed and we were unable to recover it. 00:26:39.392 [2024-10-08 18:33:57.332385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.392 [2024-10-08 18:33:57.332412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.392 qpair failed and we were unable to recover it. 00:26:39.392 [2024-10-08 18:33:57.332495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.332523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.332641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.332669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.332782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.332809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 
00:26:39.393 [2024-10-08 18:33:57.332886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.332914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.333032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.333059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.333149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.333176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.333302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.333331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.333449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.333476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.333590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.333617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.333705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.333733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.333817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.333845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.333948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.333975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.334093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.334120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 
00:26:39.393 [2024-10-08 18:33:57.334235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.334263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.334361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.334389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.334483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.334510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.334598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.334626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.334770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.334798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.334899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.334926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.335008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.335036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.335120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.335147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.335240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.335267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.335403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.335431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 
00:26:39.393 [2024-10-08 18:33:57.335537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.335565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.335660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.335687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.335801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.335828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.335922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.335949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.336040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.336068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.336190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.336218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.336333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.336362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.336477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.336509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.336643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.336684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.336784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.336814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 
00:26:39.393 [2024-10-08 18:33:57.336911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.336941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.337056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.337084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.337202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.337231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.337330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.337358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.337474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.337503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.393 [2024-10-08 18:33:57.337619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.393 [2024-10-08 18:33:57.337648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.393 qpair failed and we were unable to recover it. 00:26:39.394 [2024-10-08 18:33:57.337768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.394 [2024-10-08 18:33:57.337796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.394 qpair failed and we were unable to recover it. 00:26:39.394 [2024-10-08 18:33:57.337883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.394 [2024-10-08 18:33:57.337910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.394 qpair failed and we were unable to recover it. 00:26:39.394 [2024-10-08 18:33:57.337992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.394 [2024-10-08 18:33:57.338020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.394 qpair failed and we were unable to recover it. 00:26:39.394 [2024-10-08 18:33:57.338138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.394 [2024-10-08 18:33:57.338166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.394 qpair failed and we were unable to recover it. 
00:26:39.399 [2024-10-08 18:33:57.363078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.363105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 00:26:39.399 [2024-10-08 18:33:57.363187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.363214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 00:26:39.399 [2024-10-08 18:33:57.363310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.363339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 00:26:39.399 [2024-10-08 18:33:57.363457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.363484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 00:26:39.399 [2024-10-08 18:33:57.363584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.363612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 00:26:39.399 [2024-10-08 18:33:57.363696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.363723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 00:26:39.399 [2024-10-08 18:33:57.363833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.363861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 00:26:39.399 [2024-10-08 18:33:57.363976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.364003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 00:26:39.399 [2024-10-08 18:33:57.364090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.364119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 00:26:39.399 [2024-10-08 18:33:57.364234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.364262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 
00:26:39.399 [2024-10-08 18:33:57.364369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.364397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 00:26:39.399 [2024-10-08 18:33:57.364485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.364512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 00:26:39.399 [2024-10-08 18:33:57.364599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.364627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 00:26:39.399 [2024-10-08 18:33:57.364713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.364741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 00:26:39.399 [2024-10-08 18:33:57.364836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.364864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 00:26:39.399 [2024-10-08 18:33:57.364983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.365011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 00:26:39.399 [2024-10-08 18:33:57.365101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.365128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 00:26:39.399 [2024-10-08 18:33:57.365242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.365270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 00:26:39.399 [2024-10-08 18:33:57.365371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.365400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 00:26:39.399 [2024-10-08 18:33:57.365493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.365521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 
00:26:39.399 [2024-10-08 18:33:57.365635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.365663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 00:26:39.399 [2024-10-08 18:33:57.365756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.365784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 00:26:39.399 [2024-10-08 18:33:57.365923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.365951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 00:26:39.399 [2024-10-08 18:33:57.366062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.366089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 00:26:39.399 [2024-10-08 18:33:57.366209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.366236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 00:26:39.399 [2024-10-08 18:33:57.366327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.366356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 00:26:39.399 [2024-10-08 18:33:57.366440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.366467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 00:26:39.399 [2024-10-08 18:33:57.366594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.366622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 00:26:39.399 [2024-10-08 18:33:57.366708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.366736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 00:26:39.399 [2024-10-08 18:33:57.366823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.366851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 
00:26:39.399 [2024-10-08 18:33:57.366966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.366994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 00:26:39.399 [2024-10-08 18:33:57.367093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.367120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 00:26:39.399 [2024-10-08 18:33:57.367205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.367234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 00:26:39.399 [2024-10-08 18:33:57.367348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.399 [2024-10-08 18:33:57.367376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.399 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.367486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.367513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.367604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.367632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.367776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.367804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.367912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.367944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.368024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.368052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.368167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.368194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 
00:26:39.400 [2024-10-08 18:33:57.368308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.368336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.368425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.368453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.368542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.368569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.368663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.368691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.368817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.368846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.368963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.368991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.369113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.369140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.369227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.369254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.369345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.369374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.369464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.369492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 
00:26:39.400 [2024-10-08 18:33:57.369579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.369607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.369730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.369757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.369851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.369878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.369994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.370020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.370136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.370163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.370277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.370315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.370400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.370427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.370540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.370567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.370644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.370671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.370772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.370799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 
00:26:39.400 [2024-10-08 18:33:57.370890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.370918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.371031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.371059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.371183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.371212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.371309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.371338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.371434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.371462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.371555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.371582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.371723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.371751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.371833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.371860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.371974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.372001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.372086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.372114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 
00:26:39.400 [2024-10-08 18:33:57.372226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.372253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.372355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.400 [2024-10-08 18:33:57.372383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.400 qpair failed and we were unable to recover it. 00:26:39.400 [2024-10-08 18:33:57.372496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.372523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.372616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.372644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.372732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.372761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.372848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.372875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.372970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.372998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.373078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.373111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.373210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.373237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.373335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.373363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 
00:26:39.401 [2024-10-08 18:33:57.373473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.373500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.373588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.373616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.373700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.373727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.373809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.373836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.373923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.373950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.374061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.374088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.374194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.374223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.374312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.374340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.374455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.374482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.374578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.374606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 
00:26:39.401 [2024-10-08 18:33:57.374694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.374721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.374812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.374840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.374926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.374954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.375076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.375103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.375191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.375219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.375337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.375366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.375453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.375481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.375598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.375626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.375717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.375746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.375830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.375857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 
00:26:39.401 [2024-10-08 18:33:57.375969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.375996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.376104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.376132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.376224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.376252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.376358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.376386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.376498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.376538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.376689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.376718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.376813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.376841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.376933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.376960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.377045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.377072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.377160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.377188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 
00:26:39.401 [2024-10-08 18:33:57.377301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.377329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.401 [2024-10-08 18:33:57.377418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.401 [2024-10-08 18:33:57.377447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.401 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.377529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.377556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.377635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.377663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.377753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.377780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.377898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.377926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.378001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.378028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.378115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.378149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.378260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.378295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.378377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.378404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 
00:26:39.402 [2024-10-08 18:33:57.378494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.378522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.378605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.378632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.378717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.378744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.378843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.378870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.378978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.379006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.379112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.379140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.379224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.379251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.379352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.379380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.379465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.379492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.379586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.379615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 
00:26:39.402 [2024-10-08 18:33:57.379732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.379759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.379851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.379879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.379992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.380019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.380100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.380128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.380236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.380276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.380409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.380439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.380553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.380582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.380665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.380693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.380804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.380831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.380925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.380952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 
00:26:39.402 [2024-10-08 18:33:57.381067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.381096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.381218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.381247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.381348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.381377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.381464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.381491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.381613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.381641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.381756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.381783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.381867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.381894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.381972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.381999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.382114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.382141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.382220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.382247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 
00:26:39.402 [2024-10-08 18:33:57.382363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.382391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.402 qpair failed and we were unable to recover it. 00:26:39.402 [2024-10-08 18:33:57.382484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.402 [2024-10-08 18:33:57.382511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-10-08 18:33:57.382609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.382639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-10-08 18:33:57.382730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.382758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-10-08 18:33:57.382841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.382869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-10-08 18:33:57.382961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.382988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-10-08 18:33:57.383072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.383100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-10-08 18:33:57.383215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.383243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-10-08 18:33:57.383362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.383391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-10-08 18:33:57.383477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.383504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 
00:26:39.403 [2024-10-08 18:33:57.383592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.383620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-10-08 18:33:57.383736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.383763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-10-08 18:33:57.383851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.383879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-10-08 18:33:57.383969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.383998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-10-08 18:33:57.384095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.384125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-10-08 18:33:57.384221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.384249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-10-08 18:33:57.384383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.384412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-10-08 18:33:57.384524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.384552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-10-08 18:33:57.384639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.384667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-10-08 18:33:57.384753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.384781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 
00:26:39.403 [2024-10-08 18:33:57.384897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.384924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-10-08 18:33:57.385046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.385074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-10-08 18:33:57.385190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.385218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-10-08 18:33:57.385337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.385367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-10-08 18:33:57.385481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.385509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-10-08 18:33:57.385592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.385620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-10-08 18:33:57.385759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.385786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-10-08 18:33:57.385878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.385905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-10-08 18:33:57.385992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.386019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-10-08 18:33:57.386109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.386138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 
00:26:39.403 [2024-10-08 18:33:57.386226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.386254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-10-08 18:33:57.386345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.386373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-10-08 18:33:57.386458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.386487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-10-08 18:33:57.386603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.386631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-10-08 18:33:57.386748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.386780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-10-08 18:33:57.386860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.386888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-10-08 18:33:57.386980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.403 [2024-10-08 18:33:57.387008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.403 qpair failed and we were unable to recover it. 00:26:39.403 [2024-10-08 18:33:57.387104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.387132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.387215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.387242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.387347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.387375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 
00:26:39.404 [2024-10-08 18:33:57.387460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.387487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.387580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.387608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.387700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.387727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.387842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.387870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.387956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.387985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.388067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.388095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.388239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.388267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.388367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.388395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.388488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.388516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.388602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.388630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 
00:26:39.404 [2024-10-08 18:33:57.388744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.388773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.388856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.388883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.388981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.389009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.389128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.389155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.389245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.389272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.389398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.389425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.389515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.389542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.389657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.389685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.389778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.389804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.389920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.389949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 
00:26:39.404 [2024-10-08 18:33:57.390064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.390092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.390183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.390212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.390307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.390336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.390423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.390450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.390557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.390585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.390670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.390698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.390812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.390839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.390957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.390985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.391081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.391110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.391193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.391220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 
00:26:39.404 [2024-10-08 18:33:57.391304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.391332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.391421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.391448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.391529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.391556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.391644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.391671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.391789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.391822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.391955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.404 [2024-10-08 18:33:57.391983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.404 qpair failed and we were unable to recover it. 00:26:39.404 [2024-10-08 18:33:57.392079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.392106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.392222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.392249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.392345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.392373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.392465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.392493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 
00:26:39.405 [2024-10-08 18:33:57.392620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.392649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.392728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.392755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.392863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.392890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.392996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.393023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.393132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.393159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.393250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.393277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.393370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.393398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.393485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.393512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.393611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.393639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.393752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.393779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 
00:26:39.405 [2024-10-08 18:33:57.393865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.393893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.394000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.394028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.394113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.394142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.394232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.394258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.394384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.394411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.394505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.394532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.394645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.394672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.394785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.394812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.394898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.394925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.395036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.395063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 
00:26:39.405 [2024-10-08 18:33:57.395171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.395199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.395295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.395323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.395439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.395467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.395577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.395618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.395721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.395751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.395868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.395896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.395976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.396003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.396097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.396125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.396222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.396249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.396353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.396382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 
00:26:39.405 [2024-10-08 18:33:57.396493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.396521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.396645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.396673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.396762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.396790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.396907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.396934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.397048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.397080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.405 [2024-10-08 18:33:57.397201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.405 [2024-10-08 18:33:57.397229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.405 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.397313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.397341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.397427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.397454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.397569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.397597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.397685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.397713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 
00:26:39.406 [2024-10-08 18:33:57.397831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.397863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.397948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.397975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.398061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.398089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.398181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.398208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.398353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.398381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.398467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.398497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.398588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.398622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.398734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.398762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.398858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.398886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.399000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.399030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 
00:26:39.406 [2024-10-08 18:33:57.399145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.399172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.399261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.399294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.399402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.399430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.399513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.399540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.399629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.399656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.399763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.399791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.399904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.399931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.400042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.400069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.400184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.400212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.400308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.400336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 
00:26:39.406 [2024-10-08 18:33:57.400419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.400446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.400566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.400593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.400673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.400701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.400787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.400814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.400927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.400954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.401079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.401117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.401226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.401267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.401371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.401401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.401493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.401521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.401641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.401669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 
00:26:39.406 [2024-10-08 18:33:57.401757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.401784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.401904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.401933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.402026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.402056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.402171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.402200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.402314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.406 [2024-10-08 18:33:57.402347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.406 qpair failed and we were unable to recover it. 00:26:39.406 [2024-10-08 18:33:57.402435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-10-08 18:33:57.402463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 00:26:39.407 [2024-10-08 18:33:57.402583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-10-08 18:33:57.402611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 00:26:39.407 [2024-10-08 18:33:57.402703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-10-08 18:33:57.402730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 00:26:39.407 [2024-10-08 18:33:57.402824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-10-08 18:33:57.402853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 00:26:39.407 [2024-10-08 18:33:57.402931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-10-08 18:33:57.402958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 
00:26:39.407 [2024-10-08 18:33:57.403043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-10-08 18:33:57.403072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 00:26:39.407 [2024-10-08 18:33:57.403156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-10-08 18:33:57.403183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 00:26:39.407 [2024-10-08 18:33:57.403264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-10-08 18:33:57.403296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 00:26:39.407 [2024-10-08 18:33:57.403391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-10-08 18:33:57.403418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 00:26:39.407 [2024-10-08 18:33:57.403505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-10-08 18:33:57.403533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 00:26:39.407 [2024-10-08 18:33:57.403624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-10-08 18:33:57.403652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 00:26:39.407 [2024-10-08 18:33:57.403738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-10-08 18:33:57.403766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 00:26:39.407 [2024-10-08 18:33:57.403894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-10-08 18:33:57.403923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 00:26:39.407 [2024-10-08 18:33:57.404031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-10-08 18:33:57.404059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 00:26:39.407 [2024-10-08 18:33:57.404146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.407 [2024-10-08 18:33:57.404174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.407 qpair failed and we were unable to recover it. 
00:26:39.412 [2024-10-08 18:33:57.430281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.412 [2024-10-08 18:33:57.430313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.412 qpair failed and we were unable to recover it. 00:26:39.412 [2024-10-08 18:33:57.430400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.412 [2024-10-08 18:33:57.430428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.412 qpair failed and we were unable to recover it. 00:26:39.412 [2024-10-08 18:33:57.430510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.412 [2024-10-08 18:33:57.430538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.412 qpair failed and we were unable to recover it. 00:26:39.412 [2024-10-08 18:33:57.430626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.412 [2024-10-08 18:33:57.430654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.412 qpair failed and we were unable to recover it. 00:26:39.412 [2024-10-08 18:33:57.430764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.412 [2024-10-08 18:33:57.430791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.412 qpair failed and we were unable to recover it. 00:26:39.412 [2024-10-08 18:33:57.430911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.412 [2024-10-08 18:33:57.430938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.412 qpair failed and we were unable to recover it. 00:26:39.412 [2024-10-08 18:33:57.431028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.412 [2024-10-08 18:33:57.431055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.412 qpair failed and we were unable to recover it. 00:26:39.412 [2024-10-08 18:33:57.431141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.412 [2024-10-08 18:33:57.431168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.412 qpair failed and we were unable to recover it. 00:26:39.412 [2024-10-08 18:33:57.431252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.412 [2024-10-08 18:33:57.431279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.412 qpair failed and we were unable to recover it. 00:26:39.412 [2024-10-08 18:33:57.431401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.412 [2024-10-08 18:33:57.431428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.412 qpair failed and we were unable to recover it. 
00:26:39.412 [2024-10-08 18:33:57.431508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.412 [2024-10-08 18:33:57.431535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.412 qpair failed and we were unable to recover it. 00:26:39.412 [2024-10-08 18:33:57.431624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.412 [2024-10-08 18:33:57.431651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.412 qpair failed and we were unable to recover it. 00:26:39.412 [2024-10-08 18:33:57.431741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.412 [2024-10-08 18:33:57.431773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.412 qpair failed and we were unable to recover it. 00:26:39.412 [2024-10-08 18:33:57.431867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.412 [2024-10-08 18:33:57.431894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.412 qpair failed and we were unable to recover it. 00:26:39.412 [2024-10-08 18:33:57.432007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.412 [2024-10-08 18:33:57.432035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.412 qpair failed and we were unable to recover it. 00:26:39.412 [2024-10-08 18:33:57.432149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.412 [2024-10-08 18:33:57.432176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.412 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.432255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.432283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.432406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.432433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.432520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.432547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.432632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.432660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 
00:26:39.413 [2024-10-08 18:33:57.432746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.432775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.432874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.432914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.433010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.433039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.433124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.433152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.433235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.433264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.433401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.433441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.433547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.433577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.433694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.433723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.433803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.433831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.433945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.433973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 
00:26:39.413 [2024-10-08 18:33:57.434091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.434118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.434240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.434269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.434372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.434400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.434486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.434514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.434636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.434663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.434751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.434779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.434858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.434885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.434965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.434993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.435077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.435104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.435218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.435259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 
00:26:39.413 [2024-10-08 18:33:57.435366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.435396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.435516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.435544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.435637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.435665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.435747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.435775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.435892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.435919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.436035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.436064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.436158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.436187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.436296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.436325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.436440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.436468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.436560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.436588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 
00:26:39.413 [2024-10-08 18:33:57.436686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.436714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.436827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.436855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.436946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.436979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.437062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.437089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.437206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.413 [2024-10-08 18:33:57.437234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.413 qpair failed and we were unable to recover it. 00:26:39.413 [2024-10-08 18:33:57.437338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.437378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.437470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.437500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.437630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.437658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.437745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.437774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.437883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.437910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 
00:26:39.414 [2024-10-08 18:33:57.438027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.438054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.438132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.438160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.438249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.438276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.438371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.438401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.438490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.438518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.438605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.438633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.438753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.438782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.438900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.438929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.439015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.439043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.439181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.439209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 
00:26:39.414 [2024-10-08 18:33:57.439326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.439354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.439470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.439498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.439582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.439610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.439695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.439725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.439811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.439838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.439949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.439977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.440091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.440119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.440260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.440295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.440411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.440441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.440554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.440588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 
00:26:39.414 [2024-10-08 18:33:57.440723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.440751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.440888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.440915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.441023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.441051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.441127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.441155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.441280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.441313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.441401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.441429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.441520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.441547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.441661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.441689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.441771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.441799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.441932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.441960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 
00:26:39.414 [2024-10-08 18:33:57.442065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.442106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.442192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.442221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.442337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.442365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.442489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.442517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.442628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.414 [2024-10-08 18:33:57.442656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.414 qpair failed and we were unable to recover it. 00:26:39.414 [2024-10-08 18:33:57.442770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.442798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 00:26:39.415 [2024-10-08 18:33:57.442912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.442940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 00:26:39.415 [2024-10-08 18:33:57.443048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.443078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 00:26:39.415 [2024-10-08 18:33:57.443226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.443254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 00:26:39.415 [2024-10-08 18:33:57.443353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.443383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 
00:26:39.415 [2024-10-08 18:33:57.443498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.443526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 00:26:39.415 [2024-10-08 18:33:57.443641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.443669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 00:26:39.415 [2024-10-08 18:33:57.443748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.443776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 00:26:39.415 [2024-10-08 18:33:57.443866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.443895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 00:26:39.415 [2024-10-08 18:33:57.443986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.444014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 00:26:39.415 [2024-10-08 18:33:57.444164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.444192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 00:26:39.415 [2024-10-08 18:33:57.444312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.444340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 00:26:39.415 [2024-10-08 18:33:57.444425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.444453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 00:26:39.415 [2024-10-08 18:33:57.444543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.444571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 00:26:39.415 [2024-10-08 18:33:57.444653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.444680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 
00:26:39.415 [2024-10-08 18:33:57.444766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.444793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 00:26:39.415 [2024-10-08 18:33:57.444931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.444959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 00:26:39.415 [2024-10-08 18:33:57.445067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.445095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 00:26:39.415 [2024-10-08 18:33:57.445213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.445241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 00:26:39.415 [2024-10-08 18:33:57.445330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.445359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 00:26:39.415 [2024-10-08 18:33:57.445484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.445525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 00:26:39.415 [2024-10-08 18:33:57.445669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.445698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 00:26:39.415 [2024-10-08 18:33:57.445817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.445845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 00:26:39.415 [2024-10-08 18:33:57.445961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.445988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 00:26:39.415 [2024-10-08 18:33:57.446076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.446109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 
00:26:39.415 [2024-10-08 18:33:57.446261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.446296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 00:26:39.415 [2024-10-08 18:33:57.446413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.446441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 00:26:39.415 [2024-10-08 18:33:57.446533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.446560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 00:26:39.415 [2024-10-08 18:33:57.446667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.446694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 00:26:39.415 [2024-10-08 18:33:57.446774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.446802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 00:26:39.415 [2024-10-08 18:33:57.446909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.446937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 00:26:39.415 [2024-10-08 18:33:57.447049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.447077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 00:26:39.415 [2024-10-08 18:33:57.447186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.447213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 00:26:39.415 [2024-10-08 18:33:57.447354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.447382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 00:26:39.415 [2024-10-08 18:33:57.447465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.415 [2024-10-08 18:33:57.447494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.415 qpair failed and we were unable to recover it. 
00:26:39.415 [2024-10-08 18:33:57.447584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.415 [2024-10-08 18:33:57.447613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420
00:26:39.415 qpair failed and we were unable to recover it.
00:26:39.415 [2024-10-08 18:33:57.448270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.416 [2024-10-08 18:33:57.448316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420
00:26:39.416 qpair failed and we were unable to recover it.
00:26:39.416 [2024-10-08 18:33:57.448443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.416 [2024-10-08 18:33:57.448472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420
00:26:39.416 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failed triplet repeats for the remaining reconnect attempts against tqpair=0x1ff7060, 0x7f816c000b90 and 0x7f8174000b90 (all addr=10.0.0.2, port=4420, errno = 111) through 18:33:57.476 ...]
00:26:39.421 [2024-10-08 18:33:57.477062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.421 [2024-10-08 18:33:57.477103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.421 qpair failed and we were unable to recover it. 00:26:39.421 [2024-10-08 18:33:57.477207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.421 [2024-10-08 18:33:57.477237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.421 qpair failed and we were unable to recover it. 00:26:39.421 [2024-10-08 18:33:57.477363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.421 [2024-10-08 18:33:57.477392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.421 qpair failed and we were unable to recover it. 00:26:39.421 [2024-10-08 18:33:57.477481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.421 [2024-10-08 18:33:57.477509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.421 qpair failed and we were unable to recover it. 00:26:39.421 [2024-10-08 18:33:57.477598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.421 [2024-10-08 18:33:57.477626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.421 qpair failed and we were unable to recover it. 00:26:39.421 [2024-10-08 18:33:57.477719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.421 [2024-10-08 18:33:57.477747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.421 qpair failed and we were unable to recover it. 00:26:39.421 [2024-10-08 18:33:57.477859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.421 [2024-10-08 18:33:57.477888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.421 qpair failed and we were unable to recover it. 00:26:39.421 [2024-10-08 18:33:57.477996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.421 [2024-10-08 18:33:57.478023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.421 qpair failed and we were unable to recover it. 00:26:39.421 [2024-10-08 18:33:57.478108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.421 [2024-10-08 18:33:57.478136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.421 qpair failed and we were unable to recover it. 00:26:39.421 [2024-10-08 18:33:57.478222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.421 [2024-10-08 18:33:57.478249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.421 qpair failed and we were unable to recover it. 
00:26:39.421 [2024-10-08 18:33:57.478375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.421 [2024-10-08 18:33:57.478403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.421 qpair failed and we were unable to recover it. 00:26:39.421 [2024-10-08 18:33:57.478513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.421 [2024-10-08 18:33:57.478540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.421 qpair failed and we were unable to recover it. 00:26:39.421 [2024-10-08 18:33:57.478658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.421 [2024-10-08 18:33:57.478687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.421 qpair failed and we were unable to recover it. 00:26:39.421 [2024-10-08 18:33:57.478765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.421 [2024-10-08 18:33:57.478792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.421 qpair failed and we were unable to recover it. 00:26:39.421 [2024-10-08 18:33:57.478903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.421 [2024-10-08 18:33:57.478931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.421 qpair failed and we were unable to recover it. 00:26:39.421 [2024-10-08 18:33:57.479072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.421 [2024-10-08 18:33:57.479099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.421 qpair failed and we were unable to recover it. 00:26:39.421 [2024-10-08 18:33:57.479186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.421 [2024-10-08 18:33:57.479213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.421 qpair failed and we were unable to recover it. 00:26:39.421 [2024-10-08 18:33:57.479311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.421 [2024-10-08 18:33:57.479340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.421 qpair failed and we were unable to recover it. 00:26:39.421 [2024-10-08 18:33:57.479422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.421 [2024-10-08 18:33:57.479450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.421 qpair failed and we were unable to recover it. 00:26:39.421 [2024-10-08 18:33:57.479537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.421 [2024-10-08 18:33:57.479564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.421 qpair failed and we were unable to recover it. 
00:26:39.421 [2024-10-08 18:33:57.479667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.421 [2024-10-08 18:33:57.479695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.421 qpair failed and we were unable to recover it. 00:26:39.421 [2024-10-08 18:33:57.479786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.421 [2024-10-08 18:33:57.479816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.421 qpair failed and we were unable to recover it. 00:26:39.421 [2024-10-08 18:33:57.479911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.421 [2024-10-08 18:33:57.479938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.421 qpair failed and we were unable to recover it. 00:26:39.421 [2024-10-08 18:33:57.480051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.421 [2024-10-08 18:33:57.480079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.421 qpair failed and we were unable to recover it. 00:26:39.421 [2024-10-08 18:33:57.480192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.421 [2024-10-08 18:33:57.480220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.421 qpair failed and we were unable to recover it. 00:26:39.421 [2024-10-08 18:33:57.480358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.421 [2024-10-08 18:33:57.480386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.421 qpair failed and we were unable to recover it. 00:26:39.421 [2024-10-08 18:33:57.480538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.421 [2024-10-08 18:33:57.480579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.421 qpair failed and we were unable to recover it. 00:26:39.421 [2024-10-08 18:33:57.480697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.421 [2024-10-08 18:33:57.480727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.421 qpair failed and we were unable to recover it. 00:26:39.421 [2024-10-08 18:33:57.480821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.421 [2024-10-08 18:33:57.480850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.421 qpair failed and we were unable to recover it. 00:26:39.421 [2024-10-08 18:33:57.480962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.480990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 
00:26:39.422 [2024-10-08 18:33:57.481131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.481159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.481244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.481272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.481372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.481401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.481543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.481572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.481686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.481714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.481803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.481830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.481915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.481943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.482027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.482055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.482181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.482211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.482307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.482336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 
00:26:39.422 [2024-10-08 18:33:57.482425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.482452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.482539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.482567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.482682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.482710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.482853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.482881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.482973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.483002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.483081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.483110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.483249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.483278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.483410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.483438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.483538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.483566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.483708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.483736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 
00:26:39.422 [2024-10-08 18:33:57.483851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.483881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.483993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.484021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.484135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.484163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.484276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.484312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.484426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.484454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.484567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.484595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.484736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.484764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.484855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.484883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.484973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.485000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.485116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.485143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 
00:26:39.422 [2024-10-08 18:33:57.485280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.485329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.485425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.485460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.485551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.485579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.485701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.485730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.485849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.485877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.486023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.486063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.486155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.486184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.486268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.486303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.486416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.422 [2024-10-08 18:33:57.486445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.422 qpair failed and we were unable to recover it. 00:26:39.422 [2024-10-08 18:33:57.486557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.486585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 
00:26:39.423 [2024-10-08 18:33:57.486696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.486724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.486807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.486834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.486954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.486985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.487079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.487108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.487229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.487259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.487381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.487408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.487522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.487549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.487666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.487693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.487807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.487835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.487924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.487951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 
00:26:39.423 [2024-10-08 18:33:57.488100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.488127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.488242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.488270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.488391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.488419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.488507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.488534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.488624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.488652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.488790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.488818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.488931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.488958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.489067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.489095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.489212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.489245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.489369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.489400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 
00:26:39.423 [2024-10-08 18:33:57.489546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.489574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.489688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.489716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.489857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.489886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.489999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.490027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.490117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.490145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.490230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.490258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.490380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.490408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.490496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.490524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.490633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.490660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.490792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.490820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 
00:26:39.423 [2024-10-08 18:33:57.490914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.490941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.491062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.491091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.491196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.491236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.491371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.491401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.491492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.491521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.491639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.491667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.491758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.491786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.491902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.491930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.492070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.423 [2024-10-08 18:33:57.492098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.423 qpair failed and we were unable to recover it. 00:26:39.423 [2024-10-08 18:33:57.492226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.492253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 
00:26:39.424 [2024-10-08 18:33:57.492363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.492391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.492515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.492556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.492678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.492708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.492801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.492830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.492926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.492954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.493068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.493101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.493213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.493241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.493339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.493368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.493484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.493511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.493602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.493632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 
00:26:39.424 [2024-10-08 18:33:57.493726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.493754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.493868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.493896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.494037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.494065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.494191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.494220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.494313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.494341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.494459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.494488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.494574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.494602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.494676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.494703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.494811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.494840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.494970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.494999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 
00:26:39.424 [2024-10-08 18:33:57.495121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.495150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.495239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.495266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.495357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.495384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.495471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.495499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.495579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.495607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.495719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.495746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.495834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.495862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.495995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.496023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.496114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.496143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.496252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.496280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 
00:26:39.424 [2024-10-08 18:33:57.496426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.496454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.496596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.496623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.496766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.496794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.496917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.496944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.497086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.497115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.497230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.497259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.497358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.497387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.497502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.497530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.497648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.424 [2024-10-08 18:33:57.497676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.424 qpair failed and we were unable to recover it. 00:26:39.424 [2024-10-08 18:33:57.497783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.497810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.425 qpair failed and we were unable to recover it. 
00:26:39.425 [2024-10-08 18:33:57.497947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.497976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.425 qpair failed and we were unable to recover it. 00:26:39.425 [2024-10-08 18:33:57.498093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.498121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.425 qpair failed and we were unable to recover it. 00:26:39.425 [2024-10-08 18:33:57.498209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.498235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.425 qpair failed and we were unable to recover it. 00:26:39.425 [2024-10-08 18:33:57.498332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.498361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.425 qpair failed and we were unable to recover it. 00:26:39.425 [2024-10-08 18:33:57.498444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.498473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.425 qpair failed and we were unable to recover it. 00:26:39.425 [2024-10-08 18:33:57.498564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.498596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.425 qpair failed and we were unable to recover it. 00:26:39.425 [2024-10-08 18:33:57.498718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.498747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.425 qpair failed and we were unable to recover it. 00:26:39.425 [2024-10-08 18:33:57.498837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.498865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.425 qpair failed and we were unable to recover it. 00:26:39.425 [2024-10-08 18:33:57.498980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.499007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.425 qpair failed and we were unable to recover it. 00:26:39.425 [2024-10-08 18:33:57.499129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.499157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.425 qpair failed and we were unable to recover it. 
00:26:39.425 [2024-10-08 18:33:57.499248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.499277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.425 qpair failed and we were unable to recover it. 00:26:39.425 [2024-10-08 18:33:57.499384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.499424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.425 qpair failed and we were unable to recover it. 00:26:39.425 [2024-10-08 18:33:57.499542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.499572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.425 qpair failed and we were unable to recover it. 00:26:39.425 [2024-10-08 18:33:57.499661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.499689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.425 qpair failed and we were unable to recover it. 00:26:39.425 [2024-10-08 18:33:57.499789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.499817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.425 qpair failed and we were unable to recover it. 00:26:39.425 [2024-10-08 18:33:57.499954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.499982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.425 qpair failed and we were unable to recover it. 00:26:39.425 [2024-10-08 18:33:57.500070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.500098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.425 qpair failed and we were unable to recover it. 00:26:39.425 [2024-10-08 18:33:57.500196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.500225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.425 qpair failed and we were unable to recover it. 00:26:39.425 [2024-10-08 18:33:57.500310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.500338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.425 qpair failed and we were unable to recover it. 00:26:39.425 [2024-10-08 18:33:57.500438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.500466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.425 qpair failed and we were unable to recover it. 
00:26:39.425 [2024-10-08 18:33:57.500556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.500584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.425 qpair failed and we were unable to recover it. 00:26:39.425 [2024-10-08 18:33:57.500696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.500723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.425 qpair failed and we were unable to recover it. 00:26:39.425 [2024-10-08 18:33:57.500815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.500842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.425 qpair failed and we were unable to recover it. 00:26:39.425 [2024-10-08 18:33:57.500954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.500981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.425 qpair failed and we were unable to recover it. 00:26:39.425 [2024-10-08 18:33:57.501096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.501124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.425 qpair failed and we were unable to recover it. 00:26:39.425 [2024-10-08 18:33:57.501215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.501243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.425 qpair failed and we were unable to recover it. 00:26:39.425 [2024-10-08 18:33:57.501361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.501389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.425 qpair failed and we were unable to recover it. 00:26:39.425 [2024-10-08 18:33:57.501477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.501505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.425 qpair failed and we were unable to recover it. 00:26:39.425 [2024-10-08 18:33:57.501596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.501624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.425 qpair failed and we were unable to recover it. 00:26:39.425 [2024-10-08 18:33:57.501751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.501779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.425 qpair failed and we were unable to recover it. 
00:26:39.425 [2024-10-08 18:33:57.501894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.501922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.425 qpair failed and we were unable to recover it. 00:26:39.425 [2024-10-08 18:33:57.502042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.502070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.425 qpair failed and we were unable to recover it. 00:26:39.425 [2024-10-08 18:33:57.502198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.425 [2024-10-08 18:33:57.502238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.502343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.502372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.502462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.502491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.502578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.502606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.502688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.502715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.502823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.502851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.502968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.502995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.503113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.503140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 
00:26:39.426 [2024-10-08 18:33:57.503261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.503297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.503380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.503408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.503522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.503550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.503665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.503693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.503780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.503809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.503906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.503934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.504049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.504076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.504175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.504203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.504351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.504393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.504488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.504517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 
00:26:39.426 [2024-10-08 18:33:57.504612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.504640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.504726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.504753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.504864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.504892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.504980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.505007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.505090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.505118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.505204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.505232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.505309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.505337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.505428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.505456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.505540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.505568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.505715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.505743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 
00:26:39.426 [2024-10-08 18:33:57.505834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.505862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.505977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.506005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.506093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.506124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.506212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.506239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.506363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.506392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.506478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.506505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.506618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.506646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.506742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.506770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.506854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.506883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.507011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.507041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 
00:26:39.426 [2024-10-08 18:33:57.507149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.507176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.426 [2024-10-08 18:33:57.507304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.426 [2024-10-08 18:33:57.507332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.426 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.507444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.507478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.507576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.507604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.507691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.507720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.507810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.507838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.507980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.508008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.508117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.508145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.508232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.508260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.508365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.508394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 
00:26:39.427 [2024-10-08 18:33:57.508507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.508534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.508617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.508644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.508753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.508780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.508899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.508926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.509015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.509042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.509124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.509151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.509247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.509277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.509401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.509429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.509540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.509568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.509653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.509682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 
00:26:39.427 [2024-10-08 18:33:57.509767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.509795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.509914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.509942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.510029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.510057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.510171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.510198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.510317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.510345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.510434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.510460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.510549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.510577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.510687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.510715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.510802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.510830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.510920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.510948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 
00:26:39.427 [2024-10-08 18:33:57.511062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.511090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.511182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.511209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.511308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.511350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.511452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.511483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.511572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.511600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.511693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.511721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.511832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.511860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.511941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.511968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.512055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.512084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.512211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.512239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 
00:26:39.427 [2024-10-08 18:33:57.512357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.427 [2024-10-08 18:33:57.512385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.427 qpair failed and we were unable to recover it. 00:26:39.427 [2024-10-08 18:33:57.512477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.512504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.512615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.512643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.512739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.512768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.512852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.512879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.512966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.512994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.513102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.513130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.513241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.513269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.513367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.513394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.513512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.513540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 
00:26:39.428 [2024-10-08 18:33:57.513655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.513682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.513770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.513798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.513895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.513922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.514040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.514067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.514159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.514187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.514302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.514330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.514463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.514492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.514616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.514645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.514734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.514762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.514855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.514883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 
00:26:39.428 [2024-10-08 18:33:57.514996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.515024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.515122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.515163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.515285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.515331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.515455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.515483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.515560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.515587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.515705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.515733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.515824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.515851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.515969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.515997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.516104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.516144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.516249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.516283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 
00:26:39.428 [2024-10-08 18:33:57.516417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.516445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.516559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.516587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.516678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.516705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.516798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.516826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.516939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.516966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.517056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.517084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.517206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.517235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.517335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.517365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.517487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.517514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.517624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.517652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 
00:26:39.428 [2024-10-08 18:33:57.517767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.428 [2024-10-08 18:33:57.517795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.428 qpair failed and we were unable to recover it. 00:26:39.428 [2024-10-08 18:33:57.517878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.517905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.517993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.518022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.518129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.518157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.518273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.518311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.518425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.518453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.518559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.518599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.518735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.518765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.518854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.518883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.518997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.519025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 
00:26:39.429 [2024-10-08 18:33:57.519138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.519166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.519266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.519321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.519448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.519477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.519561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.519589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.519669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.519697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.519816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.519845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.519928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.519960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.520066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.520093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.520185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.520212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.520330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.520359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 
00:26:39.429 [2024-10-08 18:33:57.520440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.520470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.520565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.520593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.520707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.520734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.520814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.520842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.520929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.520958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.521046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.521073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.521192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.521220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.521311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.521340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.521453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.521481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.521558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.521584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 
00:26:39.429 [2024-10-08 18:33:57.521680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.521709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.521789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.521816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.521930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.521958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.522066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.522093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.522187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.522215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.522331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.522359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.522473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.522500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.522630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.522657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.522776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.522803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.429 [2024-10-08 18:33:57.522918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.522945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 
00:26:39.429 [2024-10-08 18:33:57.523038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.429 [2024-10-08 18:33:57.523066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.429 qpair failed and we were unable to recover it. 00:26:39.430 [2024-10-08 18:33:57.523182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.523211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 00:26:39.430 [2024-10-08 18:33:57.523324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.523365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 00:26:39.430 [2024-10-08 18:33:57.523516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.523562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 00:26:39.430 [2024-10-08 18:33:57.523719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.523749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 00:26:39.430 [2024-10-08 18:33:57.523843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.523871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 00:26:39.430 [2024-10-08 18:33:57.523962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.523990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 00:26:39.430 [2024-10-08 18:33:57.524082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.524110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 00:26:39.430 [2024-10-08 18:33:57.524208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.524250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 00:26:39.430 [2024-10-08 18:33:57.524384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.524426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 
00:26:39.430 [2024-10-08 18:33:57.524529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.524558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 00:26:39.430 [2024-10-08 18:33:57.524701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.524729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 00:26:39.430 [2024-10-08 18:33:57.524818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.524846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 00:26:39.430 [2024-10-08 18:33:57.524938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.524966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 00:26:39.430 [2024-10-08 18:33:57.525082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.525111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 00:26:39.430 [2024-10-08 18:33:57.525211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.525251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 00:26:39.430 [2024-10-08 18:33:57.525381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.525418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 00:26:39.430 [2024-10-08 18:33:57.525512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.525541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 00:26:39.430 [2024-10-08 18:33:57.525626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.525654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 00:26:39.430 [2024-10-08 18:33:57.525752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.525781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 
00:26:39.430 [2024-10-08 18:33:57.525873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.525902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 00:26:39.430 [2024-10-08 18:33:57.526022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.526050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 00:26:39.430 [2024-10-08 18:33:57.526190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.526218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 00:26:39.430 [2024-10-08 18:33:57.526354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.526382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 00:26:39.430 [2024-10-08 18:33:57.526465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.526493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 00:26:39.430 [2024-10-08 18:33:57.526585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.526612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 00:26:39.430 [2024-10-08 18:33:57.526728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.526757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 00:26:39.430 [2024-10-08 18:33:57.526842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.526871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 00:26:39.430 [2024-10-08 18:33:57.526985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.527013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 00:26:39.430 [2024-10-08 18:33:57.527124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.527152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 
00:26:39.430 [2024-10-08 18:33:57.527251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.527279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 00:26:39.430 [2024-10-08 18:33:57.527381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.527409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 00:26:39.430 [2024-10-08 18:33:57.527527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.527554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 00:26:39.430 [2024-10-08 18:33:57.527668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.527695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 00:26:39.430 [2024-10-08 18:33:57.527777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.527804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 00:26:39.430 [2024-10-08 18:33:57.527917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.430 [2024-10-08 18:33:57.527943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.430 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.528062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.528090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.528168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.528196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.528313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.528342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.528452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.528493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 
00:26:39.431 [2024-10-08 18:33:57.528607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.528637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.528776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.528804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.528883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.528911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.529003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.529037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.529130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.529159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.529269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.529303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.529443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.529471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.529558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.529586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.529678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.529705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.529818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.529845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 
00:26:39.431 [2024-10-08 18:33:57.529961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.529990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.530075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.530104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.530217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.530245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.530367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.530395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.530485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.530513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.530627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.530654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.530745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.530770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.530887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.530914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.531035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.531062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.531143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.531170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 
00:26:39.431 [2024-10-08 18:33:57.531285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.531319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.531415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.531443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.531645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.531674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.531765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.531793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.531934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.531961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.532051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.532080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.532165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.532193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.532281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.532318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.532407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.532435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.532552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.532579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 
00:26:39.431 [2024-10-08 18:33:57.532665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.532691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.532802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.532829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.532945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.532974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.533089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.533117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.533234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.431 [2024-10-08 18:33:57.533261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.431 qpair failed and we were unable to recover it. 00:26:39.431 [2024-10-08 18:33:57.533356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.533385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.533473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.533501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.533590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.533618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.533733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.533761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.533875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.533902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 
00:26:39.432 [2024-10-08 18:33:57.533989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.534017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.534108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.534136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.534253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.534280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.534374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.534408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.534617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.534644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.534763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.534791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.534879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.534907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.534997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.535024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.535137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.535165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.535257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.535284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 
00:26:39.432 [2024-10-08 18:33:57.535381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.535408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.535496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.535523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.535616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.535644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.535760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.535788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.535898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.535925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.536045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.536072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.536281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.536317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.536420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.536448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.536537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.536565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.536685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.536713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 
00:26:39.432 [2024-10-08 18:33:57.536795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.536822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.536914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.536941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.537029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.537057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.537196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.537224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.537315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.537343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.537426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.537454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.537542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.537568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.537675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.537703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.537784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.537811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.537901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.537928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 
00:26:39.432 [2024-10-08 18:33:57.538023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.538050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.538142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.538170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.538283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.538316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.538402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.538430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.432 [2024-10-08 18:33:57.538517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.432 [2024-10-08 18:33:57.538544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.432 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.538661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.538688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.538772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.538798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.538915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.538943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.539028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.539055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.539161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.539202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 
00:26:39.433 [2024-10-08 18:33:57.539307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.539338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.539431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.539460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.539549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.539577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.539666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.539699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.539810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.539838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.539977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.540005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.540087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.540114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.540194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.540221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.540312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.540340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.540437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.540464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 
00:26:39.433 [2024-10-08 18:33:57.540550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.540578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.540692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.540720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.540805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.540832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.540919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.540948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.541084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.541111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.541198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.541225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.541314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.541342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.541442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.541470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.541608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.541635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.541752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.541778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 
00:26:39.433 [2024-10-08 18:33:57.541864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.541892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.541978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.542006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.542101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.542129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.542223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.542252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.542371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.542411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.542506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.542536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.542649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.542677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.542769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.542797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.542936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.542964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.543053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.543081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 
00:26:39.433 [2024-10-08 18:33:57.543184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.543231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.543361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.543391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.543477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.543504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.543593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.543621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.543706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.543734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.543847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.543875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.433 [2024-10-08 18:33:57.543959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.433 [2024-10-08 18:33:57.543988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.433 qpair failed and we were unable to recover it. 00:26:39.434 [2024-10-08 18:33:57.544081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.544109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 00:26:39.434 [2024-10-08 18:33:57.544219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.544248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 00:26:39.434 [2024-10-08 18:33:57.544338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.544366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 
00:26:39.434 [2024-10-08 18:33:57.544452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.544479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 00:26:39.434 [2024-10-08 18:33:57.544558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.544585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 00:26:39.434 [2024-10-08 18:33:57.544712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.544740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 00:26:39.434 [2024-10-08 18:33:57.544818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.544846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 00:26:39.434 [2024-10-08 18:33:57.544936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.544964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 00:26:39.434 [2024-10-08 18:33:57.545043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.545071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 00:26:39.434 [2024-10-08 18:33:57.545164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.545192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 00:26:39.434 [2024-10-08 18:33:57.545272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.545307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 00:26:39.434 [2024-10-08 18:33:57.545417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.545444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 00:26:39.434 [2024-10-08 18:33:57.545528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.545556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 
00:26:39.434 [2024-10-08 18:33:57.545669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.545696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 00:26:39.434 [2024-10-08 18:33:57.545781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.545808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 00:26:39.434 [2024-10-08 18:33:57.545888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.545915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 00:26:39.434 [2024-10-08 18:33:57.546003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.546030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 00:26:39.434 [2024-10-08 18:33:57.546150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.546178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 00:26:39.434 [2024-10-08 18:33:57.546295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.546323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 00:26:39.434 [2024-10-08 18:33:57.546415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.546442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 00:26:39.434 [2024-10-08 18:33:57.546558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.546587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 00:26:39.434 [2024-10-08 18:33:57.546710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.546737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 00:26:39.434 [2024-10-08 18:33:57.546856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.546883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 
00:26:39.434 [2024-10-08 18:33:57.547002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.547028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 00:26:39.434 [2024-10-08 18:33:57.547142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.547169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 00:26:39.434 [2024-10-08 18:33:57.547308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.547348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 00:26:39.434 [2024-10-08 18:33:57.547444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.547475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 00:26:39.434 [2024-10-08 18:33:57.547574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.547603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 00:26:39.434 [2024-10-08 18:33:57.547717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.547745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 00:26:39.434 [2024-10-08 18:33:57.547839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.547868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 00:26:39.434 [2024-10-08 18:33:57.547965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.547993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 00:26:39.434 [2024-10-08 18:33:57.548080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.548107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 00:26:39.434 [2024-10-08 18:33:57.548198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.548227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 
00:26:39.434 [2024-10-08 18:33:57.548318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.548346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 00:26:39.434 [2024-10-08 18:33:57.548441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.548469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 00:26:39.434 [2024-10-08 18:33:57.548555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.434 [2024-10-08 18:33:57.548582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.434 qpair failed and we were unable to recover it. 00:26:39.434 [2024-10-08 18:33:57.548673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.435 [2024-10-08 18:33:57.548700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.435 qpair failed and we were unable to recover it. 00:26:39.435 [2024-10-08 18:33:57.548792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.435 [2024-10-08 18:33:57.548821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.435 qpair failed and we were unable to recover it. 00:26:39.435 [2024-10-08 18:33:57.548941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.435 [2024-10-08 18:33:57.548971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.435 qpair failed and we were unable to recover it. 00:26:39.435 [2024-10-08 18:33:57.549087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.435 [2024-10-08 18:33:57.549115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.435 qpair failed and we were unable to recover it. 00:26:39.435 [2024-10-08 18:33:57.549206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.435 [2024-10-08 18:33:57.549233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.435 qpair failed and we were unable to recover it. 00:26:39.435 [2024-10-08 18:33:57.549350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.435 [2024-10-08 18:33:57.549377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.435 qpair failed and we were unable to recover it. 00:26:39.435 [2024-10-08 18:33:57.549516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.435 [2024-10-08 18:33:57.549544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.435 qpair failed and we were unable to recover it. 
00:26:39.435 [2024-10-08 18:33:57.549655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.435 [2024-10-08 18:33:57.549683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.435 qpair failed and we were unable to recover it. 00:26:39.435 [2024-10-08 18:33:57.549794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.435 [2024-10-08 18:33:57.549821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.435 qpair failed and we were unable to recover it. 00:26:39.435 [2024-10-08 18:33:57.549914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.435 [2024-10-08 18:33:57.549941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.435 qpair failed and we were unable to recover it. 00:26:39.435 [2024-10-08 18:33:57.550039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.435 [2024-10-08 18:33:57.550068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.435 qpair failed and we were unable to recover it. 00:26:39.435 [2024-10-08 18:33:57.550191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.435 [2024-10-08 18:33:57.550220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.435 qpair failed and we were unable to recover it. 00:26:39.435 [2024-10-08 18:33:57.550329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.435 [2024-10-08 18:33:57.550358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.435 qpair failed and we were unable to recover it. 00:26:39.435 [2024-10-08 18:33:57.550440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.435 [2024-10-08 18:33:57.550467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.435 qpair failed and we were unable to recover it. 00:26:39.435 [2024-10-08 18:33:57.550557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.435 [2024-10-08 18:33:57.550585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.435 qpair failed and we were unable to recover it. 00:26:39.435 [2024-10-08 18:33:57.550695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.435 [2024-10-08 18:33:57.550722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.435 qpair failed and we were unable to recover it. 00:26:39.435 [2024-10-08 18:33:57.550838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.435 [2024-10-08 18:33:57.550865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.435 qpair failed and we were unable to recover it. 
00:26:39.435 [2024-10-08 18:33:57.550947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.435 [2024-10-08 18:33:57.550975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.435 qpair failed and we were unable to recover it. 00:26:39.435 [2024-10-08 18:33:57.551067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.435 [2024-10-08 18:33:57.551095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.435 qpair failed and we were unable to recover it. 00:26:39.435 [2024-10-08 18:33:57.551191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.435 [2024-10-08 18:33:57.551219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.435 qpair failed and we were unable to recover it. 00:26:39.435 [2024-10-08 18:33:57.551321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.435 [2024-10-08 18:33:57.551350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.435 qpair failed and we were unable to recover it. 00:26:39.435 [2024-10-08 18:33:57.551436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.435 [2024-10-08 18:33:57.551464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.435 qpair failed and we were unable to recover it. 00:26:39.435 [2024-10-08 18:33:57.551574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.435 [2024-10-08 18:33:57.551602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.435 qpair failed and we were unable to recover it. 00:26:39.435 [2024-10-08 18:33:57.551689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.435 [2024-10-08 18:33:57.551716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.435 qpair failed and we were unable to recover it. 00:26:39.435 [2024-10-08 18:33:57.551806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.435 [2024-10-08 18:33:57.551840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.435 qpair failed and we were unable to recover it. 00:26:39.435 [2024-10-08 18:33:57.551947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.435 [2024-10-08 18:33:57.551975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.435 qpair failed and we were unable to recover it. 00:26:39.435 [2024-10-08 18:33:57.552089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.435 [2024-10-08 18:33:57.552116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.435 qpair failed and we were unable to recover it. 
00:26:39.435 [2024-10-08 18:33:57.552201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.435 [2024-10-08 18:33:57.552229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.435 qpair failed and we were unable to recover it. 00:26:39.435 [2024-10-08 18:33:57.552310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.435 [2024-10-08 18:33:57.552339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.435 qpair failed and we were unable to recover it. 00:26:39.435 [2024-10-08 18:33:57.552442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.435 [2024-10-08 18:33:57.552482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.435 qpair failed and we were unable to recover it. 00:26:39.435 [2024-10-08 18:33:57.552624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.552652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.552747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.552776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.552897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.552924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.553017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.553045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.553131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.553158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.553260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.553295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.553413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.553440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 
00:26:39.436 [2024-10-08 18:33:57.553528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.553555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.553648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.553675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.553765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.553793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.553875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.553903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.554017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.554046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.554181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.554208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.554300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.554328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.554444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.554471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.554556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.554583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.554691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.554719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 
00:26:39.436 [2024-10-08 18:33:57.554810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.554837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.554956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.554983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.555135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.555176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.555312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.555341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.555435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.555468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.555566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.555594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.555702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.555729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.555818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.555845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.555958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.555986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.556083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.556114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 
00:26:39.436 [2024-10-08 18:33:57.556205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.556232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.556357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.556385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.556475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.556502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.556599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.556627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.556725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.556753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.556844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.556873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.556964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.556992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.557078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.557105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.557232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.557260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.557361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.557389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 
00:26:39.436 [2024-10-08 18:33:57.557473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.557500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.557614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.557641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.557753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.436 [2024-10-08 18:33:57.557780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.436 qpair failed and we were unable to recover it. 00:26:39.436 [2024-10-08 18:33:57.557870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.557899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.558011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.558040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.558129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.558156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.558273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.558317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.558414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.558442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.558547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.558575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.558690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.558717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 
00:26:39.437 [2024-10-08 18:33:57.558829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.558856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.558971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.558999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.559093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.559120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.559201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.559229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.559310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.559337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.559420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.559448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.559538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.559566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.559680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.559707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.559848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.559875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.559961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.559988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 
00:26:39.437 [2024-10-08 18:33:57.560080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.560107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.560206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.560233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.560361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.560402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.560527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.560556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.560703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.560739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.560821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.560848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.560934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.560961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.561045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.561072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.561180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.561206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.561303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.561332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 
00:26:39.437 [2024-10-08 18:33:57.561422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.561449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.561534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.561561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.561683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.561710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.561799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.561827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.561971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.561999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.562088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.562117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.562226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.562267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.562380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.562410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.562505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.562532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.562646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.562673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 
00:26:39.437 [2024-10-08 18:33:57.562792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.562819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.562935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.437 [2024-10-08 18:33:57.562962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.437 qpair failed and we were unable to recover it. 00:26:39.437 [2024-10-08 18:33:57.563046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.438 [2024-10-08 18:33:57.563073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.438 qpair failed and we were unable to recover it. 00:26:39.438 [2024-10-08 18:33:57.563178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.438 [2024-10-08 18:33:57.563219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.438 qpair failed and we were unable to recover it. 00:26:39.438 [2024-10-08 18:33:57.563327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.438 [2024-10-08 18:33:57.563371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.438 qpair failed and we were unable to recover it. 00:26:39.438 [2024-10-08 18:33:57.563458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.438 [2024-10-08 18:33:57.563486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.438 qpair failed and we were unable to recover it. 00:26:39.438 [2024-10-08 18:33:57.563572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.438 [2024-10-08 18:33:57.563599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.438 qpair failed and we were unable to recover it. 00:26:39.438 [2024-10-08 18:33:57.563685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.438 [2024-10-08 18:33:57.563713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.438 qpair failed and we were unable to recover it. 00:26:39.438 [2024-10-08 18:33:57.563829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.438 [2024-10-08 18:33:57.563857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.438 qpair failed and we were unable to recover it. 00:26:39.438 [2024-10-08 18:33:57.563956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.438 [2024-10-08 18:33:57.563984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.438 qpair failed and we were unable to recover it. 
00:26:39.438 [2024-10-08 18:33:57.564128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.438 [2024-10-08 18:33:57.564155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.438 qpair failed and we were unable to recover it. 00:26:39.438 [2024-10-08 18:33:57.564239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.438 [2024-10-08 18:33:57.564272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.438 qpair failed and we were unable to recover it. 00:26:39.438 [2024-10-08 18:33:57.564376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.438 [2024-10-08 18:33:57.564404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.438 qpair failed and we were unable to recover it. 00:26:39.438 [2024-10-08 18:33:57.564493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.438 [2024-10-08 18:33:57.564522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.438 qpair failed and we were unable to recover it. 00:26:39.438 [2024-10-08 18:33:57.564636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.438 [2024-10-08 18:33:57.564664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.438 qpair failed and we were unable to recover it. 00:26:39.438 [2024-10-08 18:33:57.564779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.438 [2024-10-08 18:33:57.564806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.438 qpair failed and we were unable to recover it. 00:26:39.438 [2024-10-08 18:33:57.564897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.438 [2024-10-08 18:33:57.564925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.438 qpair failed and we were unable to recover it. 00:26:39.438 [2024-10-08 18:33:57.565068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.438 [2024-10-08 18:33:57.565108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.438 qpair failed and we were unable to recover it. 00:26:39.438 [2024-10-08 18:33:57.565220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.438 [2024-10-08 18:33:57.565260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.438 qpair failed and we were unable to recover it. 00:26:39.438 [2024-10-08 18:33:57.565387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.438 [2024-10-08 18:33:57.565431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:39.438 qpair failed and we were unable to recover it. 
00:26:39.443 [2024-10-08 18:33:57.590212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.443 [2024-10-08 18:33:57.590240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.443 qpair failed and we were unable to recover it. 00:26:39.443 [2024-10-08 18:33:57.590375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.443 [2024-10-08 18:33:57.590404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.443 qpair failed and we were unable to recover it. 00:26:39.443 [2024-10-08 18:33:57.590502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.443 [2024-10-08 18:33:57.590530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.443 qpair failed and we were unable to recover it. 00:26:39.443 [2024-10-08 18:33:57.590625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.443 [2024-10-08 18:33:57.590654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.443 qpair failed and we were unable to recover it. 00:26:39.443 [2024-10-08 18:33:57.590735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.443 [2024-10-08 18:33:57.590763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.443 qpair failed and we were unable to recover it. 00:26:39.443 [2024-10-08 18:33:57.590857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.443 [2024-10-08 18:33:57.590885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.443 qpair failed and we were unable to recover it. 00:26:39.443 [2024-10-08 18:33:57.590977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.443 [2024-10-08 18:33:57.591007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.443 qpair failed and we were unable to recover it. 00:26:39.443 [2024-10-08 18:33:57.591124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.443 [2024-10-08 18:33:57.591152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.443 qpair failed and we were unable to recover it. 00:26:39.443 [2024-10-08 18:33:57.591269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.443 [2024-10-08 18:33:57.591303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.443 qpair failed and we were unable to recover it. 00:26:39.443 [2024-10-08 18:33:57.591396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.443 [2024-10-08 18:33:57.591424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.443 qpair failed and we were unable to recover it. 
00:26:39.443 [2024-10-08 18:33:57.591509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.443 [2024-10-08 18:33:57.591538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.443 qpair failed and we were unable to recover it. 00:26:39.443 [2024-10-08 18:33:57.591624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.443 [2024-10-08 18:33:57.591652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.443 qpair failed and we were unable to recover it. 00:26:39.443 [2024-10-08 18:33:57.591746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.443 [2024-10-08 18:33:57.591774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.443 qpair failed and we were unable to recover it. 00:26:39.443 [2024-10-08 18:33:57.591892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.443 [2024-10-08 18:33:57.591920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.443 qpair failed and we were unable to recover it. 00:26:39.443 [2024-10-08 18:33:57.592029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.443 [2024-10-08 18:33:57.592055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.443 qpair failed and we were unable to recover it. 00:26:39.443 [2024-10-08 18:33:57.592137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.443 [2024-10-08 18:33:57.592163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.443 qpair failed and we were unable to recover it. 00:26:39.443 [2024-10-08 18:33:57.592261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.443 [2024-10-08 18:33:57.592294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.443 qpair failed and we were unable to recover it. 00:26:39.443 [2024-10-08 18:33:57.592403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.443 [2024-10-08 18:33:57.592430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.443 qpair failed and we were unable to recover it. 00:26:39.443 [2024-10-08 18:33:57.592522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.443 [2024-10-08 18:33:57.592548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.443 qpair failed and we were unable to recover it. 00:26:39.443 [2024-10-08 18:33:57.592633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.592660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 
00:26:39.444 [2024-10-08 18:33:57.592753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.592779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.592888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.592914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.593053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.593079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.593167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.593193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.593281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.593316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.593422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.593461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.593554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.593582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.593695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.593722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.593818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.593845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.593934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.593969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 
00:26:39.444 [2024-10-08 18:33:57.594052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.594079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.594189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.594215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.595296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.595330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.595432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.595459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.595557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.595583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.595674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.595701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.595790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.595818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.595939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.595966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.596082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.596110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.596206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.596247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 
00:26:39.444 [2024-10-08 18:33:57.596358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.596388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.596474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.596502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.596590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.596618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.596702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.596730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.596819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.596847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.596939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.596967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.597105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.597146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.597272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.597314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.597450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.597479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.597568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.597595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 
00:26:39.444 [2024-10-08 18:33:57.597685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.597713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.597825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.597852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.597940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.597969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.598059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.598087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.598168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.598196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.598272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.598309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.598401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.598447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.598532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.598559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.444 qpair failed and we were unable to recover it. 00:26:39.444 [2024-10-08 18:33:57.598639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.444 [2024-10-08 18:33:57.598666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.598751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.598778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 
00:26:39.445 [2024-10-08 18:33:57.598916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.598944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.599019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.599048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.599157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.599185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.599314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.599354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.599477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.599507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.599708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.599736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.599845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.599873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.599990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.600018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.600129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.600157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.600273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.600316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 
00:26:39.445 [2024-10-08 18:33:57.600415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.600444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.600580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.600607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.600760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.600787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.600908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.600935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.601028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.601055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.601143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.601171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.601314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.601355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.601447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.601476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.601572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.601600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.601742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.601770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 
00:26:39.445 [2024-10-08 18:33:57.601864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.601892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.601988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.602017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.602104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.602131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.602244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.602271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.602390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.602418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.602498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.602525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.602638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.602665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.602754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.602782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.602864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.602893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.603011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.603038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 
00:26:39.445 [2024-10-08 18:33:57.603148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.603178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.603258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.603308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.603444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.603471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.603578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.603606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.603721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.603749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.603864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.603892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.604005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.604033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.445 [2024-10-08 18:33:57.604130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.445 [2024-10-08 18:33:57.604159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.445 qpair failed and we were unable to recover it. 00:26:39.446 [2024-10-08 18:33:57.604301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.446 [2024-10-08 18:33:57.604330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.446 qpair failed and we were unable to recover it. 00:26:39.446 [2024-10-08 18:33:57.604419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.446 [2024-10-08 18:33:57.604447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.446 qpair failed and we were unable to recover it. 
00:26:39.446 [2024-10-08 18:33:57.604529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.446 [2024-10-08 18:33:57.604556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.446 qpair failed and we were unable to recover it. 00:26:39.446 [2024-10-08 18:33:57.604642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.446 [2024-10-08 18:33:57.604669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.446 qpair failed and we were unable to recover it. 00:26:39.446 [2024-10-08 18:33:57.604759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.446 [2024-10-08 18:33:57.604788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.446 qpair failed and we were unable to recover it. 00:26:39.446 [2024-10-08 18:33:57.604873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.446 [2024-10-08 18:33:57.604901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.446 qpair failed and we were unable to recover it. 00:26:39.446 [2024-10-08 18:33:57.605010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.446 [2024-10-08 18:33:57.605038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.446 qpair failed and we were unable to recover it. 00:26:39.446 [2024-10-08 18:33:57.605146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.446 [2024-10-08 18:33:57.605174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.446 qpair failed and we were unable to recover it. 00:26:39.446 [2024-10-08 18:33:57.605297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.446 [2024-10-08 18:33:57.605326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.446 qpair failed and we were unable to recover it. 00:26:39.446 [2024-10-08 18:33:57.605441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.446 [2024-10-08 18:33:57.605468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.446 qpair failed and we were unable to recover it. 00:26:39.446 [2024-10-08 18:33:57.605552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.446 [2024-10-08 18:33:57.605579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.446 qpair failed and we were unable to recover it. 00:26:39.446 [2024-10-08 18:33:57.605692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.446 [2024-10-08 18:33:57.605719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.446 qpair failed and we were unable to recover it. 
00:26:39.446 [2024-10-08 18:33:57.605832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.446 [2024-10-08 18:33:57.605860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.446 qpair failed and we were unable to recover it. 00:26:39.446 [2024-10-08 18:33:57.605946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.446 [2024-10-08 18:33:57.605973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.446 qpair failed and we were unable to recover it. 00:26:39.446 [2024-10-08 18:33:57.606086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.446 [2024-10-08 18:33:57.606113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.446 qpair failed and we were unable to recover it. 00:26:39.446 [2024-10-08 18:33:57.606227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.446 [2024-10-08 18:33:57.606254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.446 qpair failed and we were unable to recover it. 00:26:39.446 [2024-10-08 18:33:57.606378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.446 [2024-10-08 18:33:57.606407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.446 qpair failed and we were unable to recover it. 00:26:39.446 [2024-10-08 18:33:57.606499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.446 [2024-10-08 18:33:57.606526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.446 qpair failed and we were unable to recover it. 00:26:39.446 [2024-10-08 18:33:57.606615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.446 [2024-10-08 18:33:57.606642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.446 qpair failed and we were unable to recover it. 00:26:39.446 [2024-10-08 18:33:57.606720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.446 [2024-10-08 18:33:57.606748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.446 qpair failed and we were unable to recover it. 00:26:39.446 [2024-10-08 18:33:57.606862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.446 [2024-10-08 18:33:57.606891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.446 qpair failed and we were unable to recover it. 00:26:39.446 [2024-10-08 18:33:57.606978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.446 [2024-10-08 18:33:57.607005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.446 qpair failed and we were unable to recover it. 
00:26:39.446 [2024-10-08 18:33:57.607121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.446 [2024-10-08 18:33:57.607149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.446 qpair failed and we were unable to recover it. 00:26:39.446 [2024-10-08 18:33:57.607232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.446 [2024-10-08 18:33:57.607259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.446 qpair failed and we were unable to recover it. 00:26:39.446 [2024-10-08 18:33:57.607379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.446 [2024-10-08 18:33:57.607408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.446 qpair failed and we were unable to recover it. 00:26:39.446 [2024-10-08 18:33:57.607495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.446 [2024-10-08 18:33:57.607528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.446 qpair failed and we were unable to recover it. 00:26:39.446 [2024-10-08 18:33:57.607612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.446 [2024-10-08 18:33:57.607640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.446 qpair failed and we were unable to recover it. 00:26:39.446 [2024-10-08 18:33:57.607727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.446 [2024-10-08 18:33:57.607754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.446 qpair failed and we were unable to recover it. 00:26:39.446 [2024-10-08 18:33:57.607846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.446 [2024-10-08 18:33:57.607874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.446 qpair failed and we were unable to recover it. 00:26:39.446 [2024-10-08 18:33:57.607953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.446 [2024-10-08 18:33:57.607980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.446 qpair failed and we were unable to recover it. 00:26:39.446 [2024-10-08 18:33:57.608097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.446 [2024-10-08 18:33:57.608125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.446 qpair failed and we were unable to recover it. 00:26:39.446 [2024-10-08 18:33:57.608244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.446 [2024-10-08 18:33:57.608272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.446 qpair failed and we were unable to recover it. 
00:26:39.446 [2024-10-08 18:33:57.608396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.446 [2024-10-08 18:33:57.608425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420
00:26:39.446 qpair failed and we were unable to recover it.
00:26:39.446 [log collapsed: the same three-line failure repeats for every connection attempt between 18:33:57.608 and 18:33:57.637, with tqpair alternating among 0x7f8174000b90, 0x7f816c000b90, and 0x1ff7060, always addr=10.0.0.2, port=4420, always errno = 111, and each attempt ending with "qpair failed and we were unable to recover it."]
00:26:39.452 [2024-10-08 18:33:57.637766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.452 [2024-10-08 18:33:57.637793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.452 qpair failed and we were unable to recover it. 00:26:39.452 [2024-10-08 18:33:57.637881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.452 [2024-10-08 18:33:57.637909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.452 qpair failed and we were unable to recover it. 00:26:39.452 [2024-10-08 18:33:57.637995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.452 [2024-10-08 18:33:57.638024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.452 qpair failed and we were unable to recover it. 00:26:39.452 [2024-10-08 18:33:57.638120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.452 [2024-10-08 18:33:57.638151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.452 qpair failed and we were unable to recover it. 00:26:39.452 [2024-10-08 18:33:57.638238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.452 [2024-10-08 18:33:57.638267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.452 qpair failed and we were unable to recover it. 00:26:39.452 [2024-10-08 18:33:57.638382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.452 [2024-10-08 18:33:57.638410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.452 qpair failed and we were unable to recover it. 00:26:39.452 [2024-10-08 18:33:57.638526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.452 [2024-10-08 18:33:57.638554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.452 qpair failed and we were unable to recover it. 00:26:39.452 [2024-10-08 18:33:57.638671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.452 [2024-10-08 18:33:57.638699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.452 qpair failed and we were unable to recover it. 00:26:39.452 [2024-10-08 18:33:57.638836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.452 [2024-10-08 18:33:57.638863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.452 qpair failed and we were unable to recover it. 00:26:39.452 [2024-10-08 18:33:57.638977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.452 [2024-10-08 18:33:57.639006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.452 qpair failed and we were unable to recover it. 
00:26:39.452 [2024-10-08 18:33:57.639122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.452 [2024-10-08 18:33:57.639150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.452 qpair failed and we were unable to recover it. 00:26:39.452 [2024-10-08 18:33:57.639293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.452 [2024-10-08 18:33:57.639329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.452 qpair failed and we were unable to recover it. 00:26:39.452 [2024-10-08 18:33:57.639445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.452 [2024-10-08 18:33:57.639472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.452 qpair failed and we were unable to recover it. 00:26:39.452 [2024-10-08 18:33:57.639560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.452 [2024-10-08 18:33:57.639588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.452 qpair failed and we were unable to recover it. 00:26:39.452 [2024-10-08 18:33:57.639729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.452 [2024-10-08 18:33:57.639757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.452 qpair failed and we were unable to recover it. 00:26:39.452 [2024-10-08 18:33:57.639875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.452 [2024-10-08 18:33:57.639902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.452 qpair failed and we were unable to recover it. 00:26:39.452 [2024-10-08 18:33:57.640009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.452 [2024-10-08 18:33:57.640037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.452 qpair failed and we were unable to recover it. 00:26:39.452 [2024-10-08 18:33:57.640135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.452 [2024-10-08 18:33:57.640163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.452 qpair failed and we were unable to recover it. 00:26:39.452 [2024-10-08 18:33:57.640280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.452 [2024-10-08 18:33:57.640322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.452 qpair failed and we were unable to recover it. 00:26:39.452 [2024-10-08 18:33:57.640442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.452 [2024-10-08 18:33:57.640469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.452 qpair failed and we were unable to recover it. 
00:26:39.452 [2024-10-08 18:33:57.640587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.452 [2024-10-08 18:33:57.640615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.452 qpair failed and we were unable to recover it. 00:26:39.452 [2024-10-08 18:33:57.640753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.452 [2024-10-08 18:33:57.640780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.452 qpair failed and we were unable to recover it. 00:26:39.452 [2024-10-08 18:33:57.640891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.452 [2024-10-08 18:33:57.640919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.452 qpair failed and we were unable to recover it. 00:26:39.452 [2024-10-08 18:33:57.641037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.452 [2024-10-08 18:33:57.641065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 00:26:39.453 [2024-10-08 18:33:57.641197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.641237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 00:26:39.453 [2024-10-08 18:33:57.641386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.641417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 00:26:39.453 [2024-10-08 18:33:57.641536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.641563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 00:26:39.453 [2024-10-08 18:33:57.641642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.641669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 00:26:39.453 [2024-10-08 18:33:57.641808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.641835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 00:26:39.453 [2024-10-08 18:33:57.641923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.641951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 
00:26:39.453 [2024-10-08 18:33:57.642070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.642097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 00:26:39.453 [2024-10-08 18:33:57.642216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.642244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 00:26:39.453 [2024-10-08 18:33:57.642393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.642421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 00:26:39.453 [2024-10-08 18:33:57.642531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.642557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 00:26:39.453 [2024-10-08 18:33:57.642697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.642724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 00:26:39.453 [2024-10-08 18:33:57.642808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.642835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 00:26:39.453 [2024-10-08 18:33:57.642948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.642977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 00:26:39.453 [2024-10-08 18:33:57.643090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.643117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 00:26:39.453 [2024-10-08 18:33:57.643214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.643242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 00:26:39.453 [2024-10-08 18:33:57.643370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.643398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 
00:26:39.453 [2024-10-08 18:33:57.643492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.643521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 00:26:39.453 [2024-10-08 18:33:57.643611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.643638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 00:26:39.453 [2024-10-08 18:33:57.643776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.643804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 00:26:39.453 [2024-10-08 18:33:57.643945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.643972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 00:26:39.453 [2024-10-08 18:33:57.644083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.644109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 00:26:39.453 [2024-10-08 18:33:57.644239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.644280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 00:26:39.453 [2024-10-08 18:33:57.644384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.644413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 00:26:39.453 [2024-10-08 18:33:57.644507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.644535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 00:26:39.453 [2024-10-08 18:33:57.644644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.644672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 00:26:39.453 [2024-10-08 18:33:57.644757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.644784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 
00:26:39.453 [2024-10-08 18:33:57.644926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.644953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 00:26:39.453 [2024-10-08 18:33:57.645079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.645113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 00:26:39.453 [2024-10-08 18:33:57.645206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.645233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 00:26:39.453 [2024-10-08 18:33:57.645350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.645378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 00:26:39.453 [2024-10-08 18:33:57.645495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.645522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 00:26:39.453 [2024-10-08 18:33:57.645639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.645665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 00:26:39.453 [2024-10-08 18:33:57.645810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.645838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 00:26:39.453 [2024-10-08 18:33:57.645959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.645986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.453 qpair failed and we were unable to recover it. 00:26:39.453 [2024-10-08 18:33:57.646111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.453 [2024-10-08 18:33:57.646138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.646274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.646308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 
00:26:39.454 [2024-10-08 18:33:57.646421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.646448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.646558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.646586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.646731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.646758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.646894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.646921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.647034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.647061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.647153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.647180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.648157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.648190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.648312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.648342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.648424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.648451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.649149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.649185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 
00:26:39.454 [2024-10-08 18:33:57.649314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.649342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.649462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.649490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.649638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.649666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.649808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.649836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.649980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.650006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.650094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.650122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.650240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.650268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.650400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.650428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.650519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.650547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.650674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.650712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 
00:26:39.454 [2024-10-08 18:33:57.650844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.650879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.651001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.651029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.651173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.651201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.652048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.652080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.652185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.652214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.652327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.652355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.652501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.652541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.652651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.652679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.652788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.652826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.652919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.652946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 
00:26:39.454 [2024-10-08 18:33:57.653059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.653086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.653204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.653237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.653363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.653404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.653496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.653526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.653649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.653680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.653806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.653835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.454 qpair failed and we were unable to recover it. 00:26:39.454 [2024-10-08 18:33:57.653966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.454 [2024-10-08 18:33:57.654005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.455 qpair failed and we were unable to recover it. 00:26:39.455 [2024-10-08 18:33:57.654120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.455 [2024-10-08 18:33:57.654158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.455 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.654284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.654322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.654415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.654444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 
00:26:39.737 [2024-10-08 18:33:57.654550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.654578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.654664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.654692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.654779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.654807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.654919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.654946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.655052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.655080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.655190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.655231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.655370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.655402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.655489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.655522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.655617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.655645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.655736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.655764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 
00:26:39.737 [2024-10-08 18:33:57.655885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.655913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.655993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.656022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.656104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.656131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.656223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.656263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.656369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.656398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.656512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.656542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.656684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.656712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.656834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.656862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.656967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.657001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.657085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.657115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 
00:26:39.737 [2024-10-08 18:33:57.657254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.657300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.657387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.657415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.657501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.657528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.657643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.657671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.657811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.657839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.657954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.657982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.658063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.658091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.658216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.658244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.658354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.658382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.658518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.658558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 
00:26:39.737 [2024-10-08 18:33:57.658713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.658742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.658830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.658858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.658991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.659019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.659138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.659166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.659275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.659314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.659439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.737 [2024-10-08 18:33:57.659466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.737 qpair failed and we were unable to recover it. 00:26:39.737 [2024-10-08 18:33:57.659549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.659577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.659689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.659717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.659835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.659863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.659972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.659999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 
00:26:39.738 [2024-10-08 18:33:57.660117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.660145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.660262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.660297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.660437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.660465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.660576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.660603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.660696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.660723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.661466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.661503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.661624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.661652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.661767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.661795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.661882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.661909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.661991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.662019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 
00:26:39.738 [2024-10-08 18:33:57.662126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.662154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.662275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.662317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.662417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.662445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.662576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.662616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.662740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.662770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.662896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.662932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.663048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.663076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.663186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.663214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.663349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.663378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.663504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.663532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 
00:26:39.738 [2024-10-08 18:33:57.663651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.663679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.663796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.663824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.663944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.663973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.664116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.664144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.664229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.664257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.664378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.664405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.664503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.664544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.664665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.664695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.664824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.664852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.664992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.665020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 
00:26:39.738 [2024-10-08 18:33:57.665130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.665158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.665264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.665297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.665406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.665434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.665548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.665575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.665693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.665720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.738 qpair failed and we were unable to recover it. 00:26:39.738 [2024-10-08 18:33:57.665800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.738 [2024-10-08 18:33:57.665828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.665944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.665971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.666109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.666137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.666219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.666246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.666347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.666374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 
00:26:39.739 [2024-10-08 18:33:57.666473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.666503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.666597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.666626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.666737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.666765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.666877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.666905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.667024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.667052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.667134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.667166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.667251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.667278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.667393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.667421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.667532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.667559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.667669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.667697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 
00:26:39.739 [2024-10-08 18:33:57.667834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.667861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.667976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.668004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.668079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.668107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.668199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.668228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.668369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.668399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.668487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.668516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.668602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.668629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.668741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.668769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.668860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.668890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.669008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.669036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 
00:26:39.739 [2024-10-08 18:33:57.669112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.669140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.669251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.669279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.669402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.669430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.669540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.669567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.669651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.669679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.669784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.669811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.669925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.669952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.670091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.670119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.670227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.670269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.670385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.670415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 
00:26:39.739 [2024-10-08 18:33:57.670502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.670531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.670612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.670640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.670723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.670755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.739 qpair failed and we were unable to recover it. 00:26:39.739 [2024-10-08 18:33:57.670874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.739 [2024-10-08 18:33:57.670901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.670990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.671029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.671144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.671172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.671255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.671282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.671397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.671425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.671532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.671559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.671699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.671726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 
00:26:39.740 [2024-10-08 18:33:57.671824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.671851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.671988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.672015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.672141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.672182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.672334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.672363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.672448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.672476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.672588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.672615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.672740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.672767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.672879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.672906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.673026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.673053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.673166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.673193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 
00:26:39.740 [2024-10-08 18:33:57.673283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.673317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.673426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.673454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.673566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.673594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.673677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.673705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.673843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.673870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.673961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.673989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.674068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.674096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.674205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.674233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.674342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.674370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.674463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.674491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 
00:26:39.740 [2024-10-08 18:33:57.674605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.674633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.674713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.674741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.674838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.674867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.674954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.674982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.675118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.675146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.675231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.675258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.675360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.675389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.675506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.675533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.675617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.675646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.675734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.675761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 
00:26:39.740 [2024-10-08 18:33:57.675843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.675870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.740 [2024-10-08 18:33:57.676004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.740 [2024-10-08 18:33:57.676031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.740 qpair failed and we were unable to recover it. 00:26:39.741 [2024-10-08 18:33:57.676146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.676174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 00:26:39.741 [2024-10-08 18:33:57.676319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.676348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 00:26:39.741 [2024-10-08 18:33:57.676484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.676511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 00:26:39.741 [2024-10-08 18:33:57.676600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.676628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 00:26:39.741 [2024-10-08 18:33:57.676733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.676761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 00:26:39.741 [2024-10-08 18:33:57.676851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.676879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 00:26:39.741 [2024-10-08 18:33:57.676998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.677028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 00:26:39.741 [2024-10-08 18:33:57.677135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.677163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 
00:26:39.741 [2024-10-08 18:33:57.677255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.677283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 00:26:39.741 [2024-10-08 18:33:57.677418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.677446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 00:26:39.741 [2024-10-08 18:33:57.677583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.677625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 00:26:39.741 [2024-10-08 18:33:57.677753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.677784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 00:26:39.741 [2024-10-08 18:33:57.677929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.677957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 00:26:39.741 [2024-10-08 18:33:57.678071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.678099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 00:26:39.741 [2024-10-08 18:33:57.678220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.678248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 00:26:39.741 [2024-10-08 18:33:57.678354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.678384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 00:26:39.741 [2024-10-08 18:33:57.678473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.678502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 00:26:39.741 [2024-10-08 18:33:57.678612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.678639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 
00:26:39.741 [2024-10-08 18:33:57.678748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.678775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 00:26:39.741 [2024-10-08 18:33:57.678864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.678891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 00:26:39.741 [2024-10-08 18:33:57.678981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.679009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 00:26:39.741 [2024-10-08 18:33:57.679127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.679155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 00:26:39.741 [2024-10-08 18:33:57.679279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.679315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 00:26:39.741 [2024-10-08 18:33:57.679404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.679432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 00:26:39.741 [2024-10-08 18:33:57.679573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.679600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 00:26:39.741 [2024-10-08 18:33:57.679718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.679744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 00:26:39.741 [2024-10-08 18:33:57.679840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.679868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 00:26:39.741 [2024-10-08 18:33:57.679981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.680013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 
00:26:39.741 [2024-10-08 18:33:57.680123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.680151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 00:26:39.741 [2024-10-08 18:33:57.680237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.680265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 00:26:39.741 [2024-10-08 18:33:57.680362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.680390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 00:26:39.741 [2024-10-08 18:33:57.680481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.680522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 00:26:39.741 [2024-10-08 18:33:57.680643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.680673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 00:26:39.741 [2024-10-08 18:33:57.680799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.741 [2024-10-08 18:33:57.680828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.741 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.680905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.680934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.681027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.681056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.681141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.681169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.681295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.681324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 
00:26:39.742 [2024-10-08 18:33:57.681418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.681446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.681532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.681559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.681652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.681680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.681774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.681801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.681926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.681954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.682060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.682088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.682174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.682201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.682277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.682310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.682426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.682455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.682546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.682575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 
00:26:39.742 [2024-10-08 18:33:57.682692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.682721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.682811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.682839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.682918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.682946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.683099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.683140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.683263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.683319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.683437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.683466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.683582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.683615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.683735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.683763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.683899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.683927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.684048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.684075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 
00:26:39.742 [2024-10-08 18:33:57.684218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.684246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.684345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.684373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.684454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.684482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.684564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.684592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.684699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.684727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.684867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.684894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.685013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.685040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.685119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.685146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.685260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.685295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.685379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.685407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 
00:26:39.742 [2024-10-08 18:33:57.685507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.685534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.685652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.685680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.685806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.685834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.685945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.685972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.742 qpair failed and we were unable to recover it. 00:26:39.742 [2024-10-08 18:33:57.686099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.742 [2024-10-08 18:33:57.686150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.686258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.686305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.686416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.686446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.686532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.686560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.686700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.686734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.686841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.686869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 
00:26:39.743 [2024-10-08 18:33:57.686985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.687013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.687133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.687164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.687292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.687322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.687439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.687473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.687571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.687599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.687745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.687773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.687893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.687930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.688045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.688073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.688156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.688190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.688311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.688352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 
00:26:39.743 [2024-10-08 18:33:57.688439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.688466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.688543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.688571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.688687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.688714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.688807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.688834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.688947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.688974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.689112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.689139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.689215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.689244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.689347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.689375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.689483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.689511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.689601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.689629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 
00:26:39.743 [2024-10-08 18:33:57.689737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.689765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.689890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.689918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.690059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.690087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.690229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.690257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.690383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.690411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.690509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.690537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.690650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.690677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.690759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.690786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.690908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.690935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.691030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.691058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 
00:26:39.743 [2024-10-08 18:33:57.691159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.691200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.691300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.691341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.691459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.691488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.691605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.743 [2024-10-08 18:33:57.691632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.743 qpair failed and we were unable to recover it. 00:26:39.743 [2024-10-08 18:33:57.691716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.691744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.691853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.691881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.691974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.692001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.692114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.692142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.692238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.692265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.692387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.692416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 
00:26:39.744 [2024-10-08 18:33:57.692535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.692562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.692653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.692681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.692771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.692798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.692907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.692939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.693048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.693076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.693159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.693187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.693298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.693340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.693464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.693494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.693582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.693610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.693688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.693716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 
00:26:39.744 [2024-10-08 18:33:57.693839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.693867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.694005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.694033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.694154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.694183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.694313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.694347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.694434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.694461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.694577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.694604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.694716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.694743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.694875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.694903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.694991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.695020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.695136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.695164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 
00:26:39.744 [2024-10-08 18:33:57.695301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.695352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.695474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.695504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.695593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.695621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.695738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.695766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.695886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.695913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.695998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.696025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.696115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.696143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.696292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.696321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.696440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.696469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.696595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.696636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 
00:26:39.744 [2024-10-08 18:33:57.696762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.696796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.696912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.696941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.697036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.697063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.744 qpair failed and we were unable to recover it. 00:26:39.744 [2024-10-08 18:33:57.697145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.744 [2024-10-08 18:33:57.697173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-10-08 18:33:57.697298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.697335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-10-08 18:33:57.697482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.697510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-10-08 18:33:57.697618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.697646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-10-08 18:33:57.697728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.697756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-10-08 18:33:57.697879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.697906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-10-08 18:33:57.698019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.698047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 
00:26:39.745 [2024-10-08 18:33:57.698192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.698220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-10-08 18:33:57.698356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.698397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-10-08 18:33:57.698543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.698572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-10-08 18:33:57.698659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.698686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-10-08 18:33:57.698807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.698834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-10-08 18:33:57.698957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.698984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-10-08 18:33:57.699073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.699100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-10-08 18:33:57.699215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.699244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-10-08 18:33:57.699364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.699392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-10-08 18:33:57.699508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.699536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 
00:26:39.745 [2024-10-08 18:33:57.699624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.699651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-10-08 18:33:57.699743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.699770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-10-08 18:33:57.699887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.699915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-10-08 18:33:57.700064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.700091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-10-08 18:33:57.700204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.700231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-10-08 18:33:57.700340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.700368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-10-08 18:33:57.700504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.700532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-10-08 18:33:57.700614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.700646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-10-08 18:33:57.700736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.700764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-10-08 18:33:57.700840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.700868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 
00:26:39.745 [2024-10-08 18:33:57.700975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.701003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-10-08 18:33:57.701118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.701146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-10-08 18:33:57.701257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.701296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-10-08 18:33:57.701416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.701443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-10-08 18:33:57.701518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.701545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-10-08 18:33:57.701663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.701690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-10-08 18:33:57.701802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.701829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-10-08 18:33:57.701916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.701943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.745 [2024-10-08 18:33:57.702056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.745 [2024-10-08 18:33:57.702083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.745 qpair failed and we were unable to recover it. 00:26:39.746 [2024-10-08 18:33:57.702199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.746 [2024-10-08 18:33:57.702227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.746 qpair failed and we were unable to recover it. 
00:26:39.746 [2024-10-08 18:33:57.702365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.746 [2024-10-08 18:33:57.702393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.746 qpair failed and we were unable to recover it. 00:26:39.746 [2024-10-08 18:33:57.702517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.746 [2024-10-08 18:33:57.702552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.746 qpair failed and we were unable to recover it. 00:26:39.746 [2024-10-08 18:33:57.702669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.746 [2024-10-08 18:33:57.702697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.746 qpair failed and we were unable to recover it. 00:26:39.746 [2024-10-08 18:33:57.702836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.746 [2024-10-08 18:33:57.702863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.746 qpair failed and we were unable to recover it. 00:26:39.746 [2024-10-08 18:33:57.702981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.746 [2024-10-08 18:33:57.703010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.746 qpair failed and we were unable to recover it. 00:26:39.746 [2024-10-08 18:33:57.703119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.746 [2024-10-08 18:33:57.703147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.746 qpair failed and we were unable to recover it. 00:26:39.746 [2024-10-08 18:33:57.703260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.746 [2024-10-08 18:33:57.703296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.746 qpair failed and we were unable to recover it. 00:26:39.746 [2024-10-08 18:33:57.703442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.746 [2024-10-08 18:33:57.703470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.746 qpair failed and we were unable to recover it. 00:26:39.746 [2024-10-08 18:33:57.703580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.746 [2024-10-08 18:33:57.703608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.746 qpair failed and we were unable to recover it. 00:26:39.746 [2024-10-08 18:33:57.703723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.746 [2024-10-08 18:33:57.703751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.746 qpair failed and we were unable to recover it. 
00:26:39.746 [2024-10-08 18:33:57.703863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.746 [2024-10-08 18:33:57.703891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.746 qpair failed and we were unable to recover it. 00:26:39.746 [2024-10-08 18:33:57.704006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.746 [2024-10-08 18:33:57.704034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.746 qpair failed and we were unable to recover it. 00:26:39.746 [2024-10-08 18:33:57.704128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.746 [2024-10-08 18:33:57.704156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.746 qpair failed and we were unable to recover it. 00:26:39.746 [2024-10-08 18:33:57.704274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.746 [2024-10-08 18:33:57.704315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.746 qpair failed and we were unable to recover it. 00:26:39.746 [2024-10-08 18:33:57.704411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.746 [2024-10-08 18:33:57.704443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.746 qpair failed and we were unable to recover it. 00:26:39.746 [2024-10-08 18:33:57.704568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.746 [2024-10-08 18:33:57.704596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.746 qpair failed and we were unable to recover it. 00:26:39.746 [2024-10-08 18:33:57.704737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.746 [2024-10-08 18:33:57.704765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.746 qpair failed and we were unable to recover it. 00:26:39.746 [2024-10-08 18:33:57.704903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.746 [2024-10-08 18:33:57.704931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.746 qpair failed and we were unable to recover it. 00:26:39.746 [2024-10-08 18:33:57.705046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.746 [2024-10-08 18:33:57.705074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.746 qpair failed and we were unable to recover it. 00:26:39.746 [2024-10-08 18:33:57.705187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.746 [2024-10-08 18:33:57.705214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.746 qpair failed and we were unable to recover it. 
00:26:39.746 [2024-10-08 18:33:57.705329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.746 [2024-10-08 18:33:57.705370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.746 qpair failed and we were unable to recover it. 00:26:39.746 [2024-10-08 18:33:57.705474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.746 [2024-10-08 18:33:57.705503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.746 qpair failed and we were unable to recover it. 00:26:39.746 [2024-10-08 18:33:57.705593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.746 [2024-10-08 18:33:57.705622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.746 qpair failed and we were unable to recover it. 00:26:39.746 [2024-10-08 18:33:57.705735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.746 [2024-10-08 18:33:57.705763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.746 qpair failed and we were unable to recover it. 00:26:39.746 [2024-10-08 18:33:57.705852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.746 [2024-10-08 18:33:57.705880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.746 qpair failed and we were unable to recover it. 00:26:39.746 [2024-10-08 18:33:57.705991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.746 [2024-10-08 18:33:57.706019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.746 qpair failed and we were unable to recover it. 00:26:39.746 [2024-10-08 18:33:57.706104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.746 [2024-10-08 18:33:57.706132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.746 qpair failed and we were unable to recover it. 00:26:39.746 [2024-10-08 18:33:57.706228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.746 [2024-10-08 18:33:57.706256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.746 qpair failed and we were unable to recover it. 00:26:39.746 [2024-10-08 18:33:57.706356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.746 [2024-10-08 18:33:57.706386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.746 qpair failed and we were unable to recover it. 00:26:39.746 [2024-10-08 18:33:57.706505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.746 [2024-10-08 18:33:57.706532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.746 qpair failed and we were unable to recover it. 
00:26:39.746 [2024-10-08 18:33:57.706656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.746 [2024-10-08 18:33:57.706683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420
00:26:39.746 qpair failed and we were unable to recover it.
[... the same three-line pattern - posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it." - repeats from 18:33:57.706 through 18:33:57.734 for tqpairs 0x1ff7060, 0x7f816c000b90 and 0x7f8174000b90, all against addr=10.0.0.2, port=4420 ...]
00:26:39.751 [2024-10-08 18:33:57.734667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.751 [2024-10-08 18:33:57.734695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420
00:26:39.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1361427 Killed                  "${NVMF_APP[@]}" "$@"
00:26:39.751 qpair failed and we were unable to recover it.
00:26:39.751 [2024-10-08 18:33:57.734793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.751 [2024-10-08 18:33:57.734821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.751 qpair failed and we were unable to recover it. 00:26:39.751 [2024-10-08 18:33:57.734941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.751 [2024-10-08 18:33:57.734968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.751 qpair failed and we were unable to recover it. 00:26:39.751 [2024-10-08 18:33:57.735080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.751 [2024-10-08 18:33:57.735107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.751 qpair failed and we were unable to recover it. 00:26:39.751 18:33:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:26:39.751 [2024-10-08 18:33:57.735217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.751 [2024-10-08 18:33:57.735245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.751 qpair failed and we were unable to recover it. 00:26:39.751 [2024-10-08 18:33:57.735376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.751 [2024-10-08 18:33:57.735405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 00:26:39.752 18:33:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:39.752 [2024-10-08 18:33:57.735520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 [2024-10-08 18:33:57.735559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 00:26:39.752 18:33:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:39.752 [2024-10-08 18:33:57.735650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 [2024-10-08 18:33:57.735679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 00:26:39.752 18:33:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:39.752 [2024-10-08 18:33:57.735792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 [2024-10-08 18:33:57.735819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 
00:26:39.752 [2024-10-08 18:33:57.735898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 18:33:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:39.752 [2024-10-08 18:33:57.735925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 00:26:39.752 [2024-10-08 18:33:57.736037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 [2024-10-08 18:33:57.736065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 00:26:39.752 [2024-10-08 18:33:57.736219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 [2024-10-08 18:33:57.736260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 00:26:39.752 [2024-10-08 18:33:57.736397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 [2024-10-08 18:33:57.736427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 00:26:39.752 [2024-10-08 18:33:57.736542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 [2024-10-08 18:33:57.736577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 00:26:39.752 [2024-10-08 18:33:57.736694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 [2024-10-08 18:33:57.736722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 00:26:39.752 [2024-10-08 18:33:57.736839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 [2024-10-08 18:33:57.736867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 00:26:39.752 [2024-10-08 18:33:57.736982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 [2024-10-08 18:33:57.737009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 00:26:39.752 [2024-10-08 18:33:57.737130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 [2024-10-08 18:33:57.737157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 
00:26:39.752 [2024-10-08 18:33:57.737264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 [2024-10-08 18:33:57.737297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 00:26:39.752 [2024-10-08 18:33:57.737413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 [2024-10-08 18:33:57.737440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 00:26:39.752 [2024-10-08 18:33:57.737532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 [2024-10-08 18:33:57.737566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 00:26:39.752 [2024-10-08 18:33:57.737653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 [2024-10-08 18:33:57.737681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 00:26:39.752 [2024-10-08 18:33:57.737795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 [2024-10-08 18:33:57.737823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 00:26:39.752 [2024-10-08 18:33:57.737918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 [2024-10-08 18:33:57.737947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 00:26:39.752 [2024-10-08 18:33:57.738090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 [2024-10-08 18:33:57.738118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 00:26:39.752 [2024-10-08 18:33:57.738246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 [2024-10-08 18:33:57.738294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 00:26:39.752 [2024-10-08 18:33:57.738424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 [2024-10-08 18:33:57.738453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 00:26:39.752 [2024-10-08 18:33:57.738545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 [2024-10-08 18:33:57.738572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 
00:26:39.752 [2024-10-08 18:33:57.738655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 [2024-10-08 18:33:57.738683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 00:26:39.752 [2024-10-08 18:33:57.738802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 [2024-10-08 18:33:57.738829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 00:26:39.752 [2024-10-08 18:33:57.738937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 [2024-10-08 18:33:57.738976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 00:26:39.752 [2024-10-08 18:33:57.739060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 [2024-10-08 18:33:57.739089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 00:26:39.752 [2024-10-08 18:33:57.739185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 [2024-10-08 18:33:57.739213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 00:26:39.752 [2024-10-08 18:33:57.739320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 [2024-10-08 18:33:57.739355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 00:26:39.752 [2024-10-08 18:33:57.739463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 [2024-10-08 18:33:57.739490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 00:26:39.752 [2024-10-08 18:33:57.739606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 [2024-10-08 18:33:57.739636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 00:26:39.752 [2024-10-08 18:33:57.739778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 [2024-10-08 18:33:57.739806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 00:26:39.752 [2024-10-08 18:33:57.739892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 [2024-10-08 18:33:57.739921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 
00:26:39.752 18:33:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1361983 00:26:39.752 [2024-10-08 18:33:57.740016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 [2024-10-08 18:33:57.740044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.752 18:33:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:39.752 qpair failed and we were unable to recover it. 00:26:39.752 [2024-10-08 18:33:57.740137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 18:33:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1361983 00:26:39.752 [2024-10-08 18:33:57.740165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 00:26:39.752 [2024-10-08 18:33:57.740307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.752 [2024-10-08 18:33:57.740343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.752 qpair failed and we were unable to recover it. 00:26:39.752 18:33:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1361983 ']' 00:26:39.753 [2024-10-08 18:33:57.740462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.740491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 00:26:39.753 18:33:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:39.753 [2024-10-08 18:33:57.740579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.740608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 00:26:39.753 [2024-10-08 18:33:57.740695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.740723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.753 18:33:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:39.753 qpair failed and we were unable to recover it. 00:26:39.753 [2024-10-08 18:33:57.740838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.740868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 
00:26:39.753 18:33:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:39.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:39.753 [2024-10-08 18:33:57.740961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.740988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 00:26:39.753 18:33:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:39.753 18:33:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:39.753 [2024-10-08 18:33:57.741190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.741220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 00:26:39.753 [2024-10-08 18:33:57.741345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.741375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 00:26:39.753 [2024-10-08 18:33:57.741518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.741546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 00:26:39.753 [2024-10-08 18:33:57.741740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.741768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 00:26:39.753 [2024-10-08 18:33:57.741905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.741933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 00:26:39.753 [2024-10-08 18:33:57.742044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.742072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 00:26:39.753 [2024-10-08 18:33:57.742160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.742189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 
00:26:39.753 [2024-10-08 18:33:57.742280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.742317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 00:26:39.753 [2024-10-08 18:33:57.742438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.742479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 00:26:39.753 [2024-10-08 18:33:57.742608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.742637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 00:26:39.753 [2024-10-08 18:33:57.742719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.742748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 00:26:39.753 [2024-10-08 18:33:57.742834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.742862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 00:26:39.753 [2024-10-08 18:33:57.742953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.742981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 00:26:39.753 [2024-10-08 18:33:57.743122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.743151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 00:26:39.753 [2024-10-08 18:33:57.743279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.743319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 00:26:39.753 [2024-10-08 18:33:57.743414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.743442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 00:26:39.753 [2024-10-08 18:33:57.743535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.743564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 
00:26:39.753 [2024-10-08 18:33:57.743677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.743705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 00:26:39.753 [2024-10-08 18:33:57.743843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.743871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 00:26:39.753 [2024-10-08 18:33:57.743956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.743985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f816c000b90 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 00:26:39.753 [2024-10-08 18:33:57.744075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.744105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 00:26:39.753 [2024-10-08 18:33:57.744308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.744337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 00:26:39.753 [2024-10-08 18:33:57.744459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.744489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 00:26:39.753 [2024-10-08 18:33:57.744581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.744609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 00:26:39.753 [2024-10-08 18:33:57.744729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.744758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 00:26:39.753 [2024-10-08 18:33:57.744872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.744902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 00:26:39.753 [2024-10-08 18:33:57.745093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.745120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 
00:26:39.753 [2024-10-08 18:33:57.745203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.745231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 00:26:39.753 [2024-10-08 18:33:57.745359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.745387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 00:26:39.753 [2024-10-08 18:33:57.745527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.745555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 00:26:39.753 [2024-10-08 18:33:57.745666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.753 [2024-10-08 18:33:57.745694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.753 qpair failed and we were unable to recover it. 00:26:39.753 [2024-10-08 18:33:57.745801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.745829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-10-08 18:33:57.745968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.745996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-10-08 18:33:57.746187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.746214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-10-08 18:33:57.746410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.746437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-10-08 18:33:57.746528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.746554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-10-08 18:33:57.746636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.746663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 
00:26:39.754 [2024-10-08 18:33:57.746773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.746801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-10-08 18:33:57.746896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.746923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-10-08 18:33:57.747035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.747063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-10-08 18:33:57.747210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.747237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-10-08 18:33:57.747381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.747419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-10-08 18:33:57.747547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.747576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-10-08 18:33:57.747673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.747701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-10-08 18:33:57.747844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.747871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-10-08 18:33:57.747990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.748018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-10-08 18:33:57.748139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.748167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 
00:26:39.754 [2024-10-08 18:33:57.748258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.748294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-10-08 18:33:57.748413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.748441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-10-08 18:33:57.748533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.748560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-10-08 18:33:57.748645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.748671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-10-08 18:33:57.748764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.748791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-10-08 18:33:57.748875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.748902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-10-08 18:33:57.748990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.749018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-10-08 18:33:57.749112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.749139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-10-08 18:33:57.749238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.749265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-10-08 18:33:57.749370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.749400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 
00:26:39.754 [2024-10-08 18:33:57.749512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.749539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-10-08 18:33:57.749626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.749654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-10-08 18:33:57.749744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.749771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-10-08 18:33:57.749918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.749945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-10-08 18:33:57.750025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.750052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-10-08 18:33:57.750129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.750157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-10-08 18:33:57.750247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.750274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-10-08 18:33:57.750371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.750397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-10-08 18:33:57.750486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.750513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 00:26:39.754 [2024-10-08 18:33:57.750634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.754 [2024-10-08 18:33:57.750663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.754 qpair failed and we were unable to recover it. 
00:26:39.755 [2024-10-08 18:33:57.750755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.750783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.750889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.750918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.751011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.751040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.751129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.751156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.751269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.751301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.751396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.751423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.751512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.751539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.751622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.751651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.751741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.751768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.751862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.751889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 
00:26:39.755 [2024-10-08 18:33:57.751983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.752011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.752133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.752160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.752250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.752277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.752414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.752440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.752580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.752611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.752710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.752737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.752933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.752959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.753076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.753103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.753189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.753217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.753307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.753336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 
00:26:39.755 [2024-10-08 18:33:57.753423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.753449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.753564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.753591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.753704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.753732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.753825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.753851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.753943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.753971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.754071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.754099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.754197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.754225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.754323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.754352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.754446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.754473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.754574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.754602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 
00:26:39.755 [2024-10-08 18:33:57.754691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.754718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.754826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.754852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.754970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.754997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.755078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.755104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.755223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.755251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.755360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.755389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.755504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.755531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.755644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.755672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.755765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.755792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 00:26:39.755 [2024-10-08 18:33:57.755905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.755 [2024-10-08 18:33:57.755931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.755 qpair failed and we were unable to recover it. 
00:26:39.755 [2024-10-08 18:33:57.756018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.756046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.756137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.756169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.756262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.756296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.756394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.756420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.756505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.756532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.756671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.756698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.756787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.756813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.756928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.756955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.757044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.757071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.757159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.757187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 
00:26:39.756 [2024-10-08 18:33:57.757277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.757310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.757400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.757426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.757516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.757543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.757664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.757691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.757776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.757803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.757898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.757925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.758021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.758048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.758133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.758160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.758248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.758275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.758379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.758406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 
00:26:39.756 [2024-10-08 18:33:57.758493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.758521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.758610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.758637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.758732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.758759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.758844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.758872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.758965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.758994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.759083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.759111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.759254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.759280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.759412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.759438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.759522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.759554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.759654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.759682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 
00:26:39.756 [2024-10-08 18:33:57.759772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.759799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.759916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.759944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.760035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.760062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.760153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.760179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.760298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.760325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.760437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.760464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.760562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.760589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.760701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.760728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.760806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.760832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 00:26:39.756 [2024-10-08 18:33:57.760948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.760974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.756 qpair failed and we were unable to recover it. 
00:26:39.756 [2024-10-08 18:33:57.761086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.756 [2024-10-08 18:33:57.761113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.761197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.761223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.761319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.761348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.761436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.761463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.761579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.761605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.761691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.761719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.761862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.761888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.762026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.762052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.762165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.762192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.762274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.762308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 
00:26:39.757 [2024-10-08 18:33:57.762422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.762449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.762529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.762556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.762638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.762665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.762751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.762778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.762890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.762918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.763023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.763051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.763165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.763192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.763273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.763315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.763412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.763439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.763550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.763576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 
00:26:39.757 [2024-10-08 18:33:57.763693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.763720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.763838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.763864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.763954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.763981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.764095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.764123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.764215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.764242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.764345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.764372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.764458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.764485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.764570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.764597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.764688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.764715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.764814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.764841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 
00:26:39.757 [2024-10-08 18:33:57.764957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.764984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.765100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.765127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.765219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.765248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.765412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.765439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.765528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.765555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.765666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.765692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.765837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.765863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.765948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.765975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.766088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.766116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 00:26:39.757 [2024-10-08 18:33:57.766214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.766241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.757 qpair failed and we were unable to recover it. 
00:26:39.757 [2024-10-08 18:33:57.766362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.757 [2024-10-08 18:33:57.766390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.766490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.766516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.766617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.766644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.766732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.766758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.766898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.766927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.767049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.767076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.767166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.767192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.767336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.767363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.767454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.767480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.767568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.767595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 
00:26:39.758 [2024-10-08 18:33:57.767713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.767741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.767832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.767860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.767944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.767971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.768062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.768089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.768202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.768228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.768355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.768383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.768510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.768538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.768654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.768681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.768765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.768791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.768879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.768905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 
00:26:39.758 [2024-10-08 18:33:57.768999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.769026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.769137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.769164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.769254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.769281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.769381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.769409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.769546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.769574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.769691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.769717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.769815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.769842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.769958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.769985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.770079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.770106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.770205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.770232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 
00:26:39.758 [2024-10-08 18:33:57.770333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.770361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.770476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.770502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.770593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.770620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.770709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.770739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.770828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.770856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.770946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.770974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.771111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.771138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.771224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.758 [2024-10-08 18:33:57.771251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.758 qpair failed and we were unable to recover it. 00:26:39.758 [2024-10-08 18:33:57.771354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.771382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.771481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.771508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 
00:26:39.759 [2024-10-08 18:33:57.771612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.771639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.771742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.771770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.771878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.771904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.772001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.772029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.772146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.772173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.772261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.772298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.772389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.772416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.772528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.772556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.772670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.772696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.772781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.772807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 
00:26:39.759 [2024-10-08 18:33:57.772902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.772928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.773038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.773064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.773149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.773177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.773265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.773298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.773408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.773435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.773521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.773547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.773654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.773681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.773791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.773818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.773908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.773937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.774029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.774057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 
00:26:39.759 [2024-10-08 18:33:57.774172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.774199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.774300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.774328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.774416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.774444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.774530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.774556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.774669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.774695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.774828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.774855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.774992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.775019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.775106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.775133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.775224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.775252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.775345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.775377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 
00:26:39.759 [2024-10-08 18:33:57.775462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.775488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.775579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.775607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.775714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.775741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.775858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.775887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.775980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.776008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.776101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.759 [2024-10-08 18:33:57.776127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.759 qpair failed and we were unable to recover it. 00:26:39.759 [2024-10-08 18:33:57.776242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.776269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.776363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.776391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.776477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.776503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.776622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.776648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 
00:26:39.760 [2024-10-08 18:33:57.776739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.776765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.776884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.776911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.776997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.777026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.777123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.777151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.777267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.777300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.777417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.777443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.777559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.777586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.777682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.777709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.777827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.777856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.777977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.778004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 
00:26:39.760 [2024-10-08 18:33:57.778141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.778168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.778258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.778285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.778424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.778451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.778539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.778566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.778658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.778685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.778771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.778798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.778911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.778942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.779023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.779050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.779166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.779192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.779283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.779317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 
00:26:39.760 [2024-10-08 18:33:57.779401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.779429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.779530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.779556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.779651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.779678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.779770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.779797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.779893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.779919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.780033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.780059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.780143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.780168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.780256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.780283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.780409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.780435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.780521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.780548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 
00:26:39.760 [2024-10-08 18:33:57.780644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.780672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.780767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.780793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.780883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.780910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.780997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.781023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.781117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.781143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.781241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.781281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.760 [2024-10-08 18:33:57.781415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.760 [2024-10-08 18:33:57.781444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.760 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.781554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.781581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.781700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.781726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.781868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.781895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 
00:26:39.761 [2024-10-08 18:33:57.781988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.782016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.782098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.782126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.782223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.782252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.782353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.782385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.782471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.782498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.782613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.782640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.782757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.782783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.782864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.782891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.782977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.783005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.783092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.783119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 
00:26:39.761 [2024-10-08 18:33:57.783225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.783252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.783366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.783394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.783483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.783510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.783602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.783629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.783714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.783740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.783851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.783878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.783966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.783993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.784117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.784143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.784249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.784275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.784386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.784425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 
00:26:39.761 [2024-10-08 18:33:57.784522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.784551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.784635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.784663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.784747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.784774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.784890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.784918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.785033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.785059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.785141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.785167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.785260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.785296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.785407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.785433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.785513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.785540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.785624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.785651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 
00:26:39.761 [2024-10-08 18:33:57.785729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.785761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.785883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.785911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.786025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.786052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.786168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.786195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.786285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.786319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.786408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.761 [2024-10-08 18:33:57.786434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.761 qpair failed and we were unable to recover it. 00:26:39.761 [2024-10-08 18:33:57.786551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.786576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.786666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.786693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.786815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.786843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.786927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.786954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 
00:26:39.762 [2024-10-08 18:33:57.787038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.787064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.787178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.787206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.787311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.787338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.787451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.787477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.787584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.787611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.787727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.787753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.787867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.787895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.787988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.788015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.788123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.788150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.788238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.788263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 
00:26:39.762 [2024-10-08 18:33:57.788360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.788387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.788476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.788502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.788590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.788616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.788700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.788727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.788802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.788828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.788909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.788937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.789023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.789049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.789130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.789162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.789272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.789307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.789401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.789429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 
00:26:39.762 [2024-10-08 18:33:57.789545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.789571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.789658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.789685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.789779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.789807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.789918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.789945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.790057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.790085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.790175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.790201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.790224] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:26:39.762 [2024-10-08 18:33:57.790312] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:39.762 [2024-10-08 18:33:57.790343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.790371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.790454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.790479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.790570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.790596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 
00:26:39.762 [2024-10-08 18:33:57.790683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.790709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.790803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.790831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.790916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.790941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.791054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.791081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.791197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.791223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.762 [2024-10-08 18:33:57.791349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.762 [2024-10-08 18:33:57.791377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.762 qpair failed and we were unable to recover it. 00:26:39.763 [2024-10-08 18:33:57.791473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.791499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 00:26:39.763 [2024-10-08 18:33:57.791581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.791607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 00:26:39.763 [2024-10-08 18:33:57.791714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.791740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 00:26:39.763 [2024-10-08 18:33:57.791824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.791851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 
00:26:39.763 [2024-10-08 18:33:57.791966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.791992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 00:26:39.763 [2024-10-08 18:33:57.792085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.792115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 00:26:39.763 [2024-10-08 18:33:57.792210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.792238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 00:26:39.763 [2024-10-08 18:33:57.792353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.792381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 00:26:39.763 [2024-10-08 18:33:57.792501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.792530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 00:26:39.763 [2024-10-08 18:33:57.792646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.792673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 00:26:39.763 [2024-10-08 18:33:57.792764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.792790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 00:26:39.763 [2024-10-08 18:33:57.792876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.792903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 00:26:39.763 [2024-10-08 18:33:57.792994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.793022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 00:26:39.763 [2024-10-08 18:33:57.793108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.793135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 
00:26:39.763 [2024-10-08 18:33:57.793218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.793246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 00:26:39.763 [2024-10-08 18:33:57.793345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.793373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 00:26:39.763 [2024-10-08 18:33:57.793470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.793497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 00:26:39.763 [2024-10-08 18:33:57.793612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.793639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 00:26:39.763 [2024-10-08 18:33:57.793761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.793788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 00:26:39.763 [2024-10-08 18:33:57.793873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.793900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 00:26:39.763 [2024-10-08 18:33:57.793996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.794023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 00:26:39.763 [2024-10-08 18:33:57.794144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.794171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 00:26:39.763 [2024-10-08 18:33:57.794266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.794297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 00:26:39.763 [2024-10-08 18:33:57.794418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.794445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 
00:26:39.763 [2024-10-08 18:33:57.794530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.794556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 00:26:39.763 [2024-10-08 18:33:57.794664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.794691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 00:26:39.763 [2024-10-08 18:33:57.794780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.794807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 00:26:39.763 [2024-10-08 18:33:57.794905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.794932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 00:26:39.763 [2024-10-08 18:33:57.795048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.795075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 00:26:39.763 [2024-10-08 18:33:57.795163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.795191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 00:26:39.763 [2024-10-08 18:33:57.795318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.795345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 00:26:39.763 [2024-10-08 18:33:57.795440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.795468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 00:26:39.763 [2024-10-08 18:33:57.795578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.795604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 00:26:39.763 [2024-10-08 18:33:57.795737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.795763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 
00:26:39.763 [2024-10-08 18:33:57.795847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.795874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 00:26:39.763 [2024-10-08 18:33:57.796013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.763 [2024-10-08 18:33:57.796042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.763 qpair failed and we were unable to recover it. 00:26:39.763 [2024-10-08 18:33:57.796136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.764 [2024-10-08 18:33:57.796162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.764 qpair failed and we were unable to recover it. 00:26:39.764 [2024-10-08 18:33:57.796278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.764 [2024-10-08 18:33:57.796310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.764 qpair failed and we were unable to recover it. 00:26:39.764 [2024-10-08 18:33:57.796416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.764 [2024-10-08 18:33:57.796442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.764 qpair failed and we were unable to recover it. 00:26:39.764 [2024-10-08 18:33:57.796541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.764 [2024-10-08 18:33:57.796567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.764 qpair failed and we were unable to recover it. 00:26:39.764 [2024-10-08 18:33:57.796683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.764 [2024-10-08 18:33:57.796709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.764 qpair failed and we were unable to recover it. 00:26:39.764 [2024-10-08 18:33:57.796802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.764 [2024-10-08 18:33:57.796827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.764 qpair failed and we were unable to recover it. 00:26:39.764 [2024-10-08 18:33:57.796944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.764 [2024-10-08 18:33:57.796970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.764 qpair failed and we were unable to recover it. 00:26:39.764 [2024-10-08 18:33:57.797064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.764 [2024-10-08 18:33:57.797090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.764 qpair failed and we were unable to recover it. 
00:26:39.764 [2024-10-08 18:33:57.797171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.764 [2024-10-08 18:33:57.797197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420
00:26:39.764 qpair failed and we were unable to recover it.
00:26:39.764 [2024-10-08 18:33:57.799179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:39.764 [2024-10-08 18:33:57.799218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420
00:26:39.764 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111 from posix.c:1055, sock connection error from nvme_tcp.c:2399, "qpair failed and we were unable to recover it.") repeats continuously for tqpair=0x1ff7060 and tqpair=0x7f8174000b90 against addr=10.0.0.2, port=4420, from 2024-10-08 18:33:57.797 through 18:33:57.824 (log prefix 00:26:39.764 - 00:26:39.769) ...]
00:26:39.769 [2024-10-08 18:33:57.824788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.769 [2024-10-08 18:33:57.824814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.769 qpair failed and we were unable to recover it. 00:26:39.769 [2024-10-08 18:33:57.824923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.769 [2024-10-08 18:33:57.824949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.769 qpair failed and we were unable to recover it. 00:26:39.769 [2024-10-08 18:33:57.825066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.769 [2024-10-08 18:33:57.825092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.769 qpair failed and we were unable to recover it. 00:26:39.769 [2024-10-08 18:33:57.825221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.769 [2024-10-08 18:33:57.825261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.769 qpair failed and we were unable to recover it. 00:26:39.769 [2024-10-08 18:33:57.825377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.769 [2024-10-08 18:33:57.825407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.769 qpair failed and we were unable to recover it. 00:26:39.769 [2024-10-08 18:33:57.825496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.769 [2024-10-08 18:33:57.825524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.769 qpair failed and we were unable to recover it. 00:26:39.769 [2024-10-08 18:33:57.825606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.769 [2024-10-08 18:33:57.825633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.769 qpair failed and we were unable to recover it. 00:26:39.769 [2024-10-08 18:33:57.825788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.769 [2024-10-08 18:33:57.825814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.769 qpair failed and we were unable to recover it. 00:26:39.769 [2024-10-08 18:33:57.825905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.769 [2024-10-08 18:33:57.825931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.769 qpair failed and we were unable to recover it. 00:26:39.769 [2024-10-08 18:33:57.826044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.769 [2024-10-08 18:33:57.826071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.769 qpair failed and we were unable to recover it. 
00:26:39.769 [2024-10-08 18:33:57.826171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.769 [2024-10-08 18:33:57.826198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.769 qpair failed and we were unable to recover it. 00:26:39.769 [2024-10-08 18:33:57.826327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.769 [2024-10-08 18:33:57.826354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.769 qpair failed and we were unable to recover it. 00:26:39.769 [2024-10-08 18:33:57.826443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.769 [2024-10-08 18:33:57.826470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.769 qpair failed and we were unable to recover it. 00:26:39.769 [2024-10-08 18:33:57.826558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.769 [2024-10-08 18:33:57.826585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.769 qpair failed and we were unable to recover it. 00:26:39.769 [2024-10-08 18:33:57.826669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.769 [2024-10-08 18:33:57.826696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.769 qpair failed and we were unable to recover it. 00:26:39.769 [2024-10-08 18:33:57.826789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.769 [2024-10-08 18:33:57.826818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.769 qpair failed and we were unable to recover it. 00:26:39.769 [2024-10-08 18:33:57.826941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.769 [2024-10-08 18:33:57.826967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.827084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.827111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.827200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.827226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.827342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.827370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 
00:26:39.770 [2024-10-08 18:33:57.827459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.827485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.827575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.827600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.827700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.827726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.827847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.827879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.827999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.828027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.828142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.828169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.828263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.828305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.828421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.828447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.828565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.828592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.828677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.828704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 
00:26:39.770 [2024-10-08 18:33:57.828786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.828813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.828908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.828936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.829047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.829074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.829202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.829229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.829331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.829359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.829442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.829468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.829591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.829617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.829740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.829767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.829858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.829885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.829977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.830005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 
00:26:39.770 [2024-10-08 18:33:57.830125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.830152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.830267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.830300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.830395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.830421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.830513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.830539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.830618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.830644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.830732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.830758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.830847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.830874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.830980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.831006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.831092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.831119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.831215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.831242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 
00:26:39.770 [2024-10-08 18:33:57.831400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.831447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8168000b90 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.831550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.831578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.831695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.831722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.831803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.831829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.831924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.831950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.770 [2024-10-08 18:33:57.832042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.770 [2024-10-08 18:33:57.832069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.770 qpair failed and we were unable to recover it. 00:26:39.771 [2024-10-08 18:33:57.832183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.771 [2024-10-08 18:33:57.832211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.771 qpair failed and we were unable to recover it. 00:26:39.771 [2024-10-08 18:33:57.832318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.771 [2024-10-08 18:33:57.832346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.771 qpair failed and we were unable to recover it. 00:26:39.771 [2024-10-08 18:33:57.832439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.771 [2024-10-08 18:33:57.832466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.771 qpair failed and we were unable to recover it. 00:26:39.771 [2024-10-08 18:33:57.832553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.771 [2024-10-08 18:33:57.832580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.771 qpair failed and we were unable to recover it. 
00:26:39.771 [2024-10-08 18:33:57.832697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.771 [2024-10-08 18:33:57.832723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.771 qpair failed and we were unable to recover it. 00:26:39.771 [2024-10-08 18:33:57.832839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.771 [2024-10-08 18:33:57.832865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.771 qpair failed and we were unable to recover it. 00:26:39.771 [2024-10-08 18:33:57.832985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.771 [2024-10-08 18:33:57.833013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.771 qpair failed and we were unable to recover it. 00:26:39.771 [2024-10-08 18:33:57.833104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.771 [2024-10-08 18:33:57.833130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.771 qpair failed and we were unable to recover it. 00:26:39.771 [2024-10-08 18:33:57.833230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.771 [2024-10-08 18:33:57.833256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.771 qpair failed and we were unable to recover it. 00:26:39.771 [2024-10-08 18:33:57.833357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.771 [2024-10-08 18:33:57.833384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.771 qpair failed and we were unable to recover it. 00:26:39.771 [2024-10-08 18:33:57.833493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.771 [2024-10-08 18:33:57.833519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.771 qpair failed and we were unable to recover it. 00:26:39.771 [2024-10-08 18:33:57.833648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.771 [2024-10-08 18:33:57.833674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.771 qpair failed and we were unable to recover it. 00:26:39.771 [2024-10-08 18:33:57.833754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.771 [2024-10-08 18:33:57.833779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.771 qpair failed and we were unable to recover it. 00:26:39.771 [2024-10-08 18:33:57.833863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.771 [2024-10-08 18:33:57.833888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.771 qpair failed and we were unable to recover it. 
00:26:39.771 [2024-10-08 18:33:57.834017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.771 [2024-10-08 18:33:57.834044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.771 qpair failed and we were unable to recover it. 00:26:39.771 [2024-10-08 18:33:57.834157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.771 [2024-10-08 18:33:57.834184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.771 qpair failed and we were unable to recover it. 00:26:39.771 [2024-10-08 18:33:57.834278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.771 [2024-10-08 18:33:57.834313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.771 qpair failed and we were unable to recover it. 00:26:39.771 [2024-10-08 18:33:57.834403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.771 [2024-10-08 18:33:57.834431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.771 qpair failed and we were unable to recover it. 00:26:39.771 [2024-10-08 18:33:57.834547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.771 [2024-10-08 18:33:57.834574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.771 qpair failed and we were unable to recover it. 00:26:39.771 [2024-10-08 18:33:57.834695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.771 [2024-10-08 18:33:57.834722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.771 qpair failed and we were unable to recover it. 00:26:39.771 [2024-10-08 18:33:57.834814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.771 [2024-10-08 18:33:57.834841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.771 qpair failed and we were unable to recover it. 00:26:39.771 [2024-10-08 18:33:57.834926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.771 [2024-10-08 18:33:57.834954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.771 qpair failed and we were unable to recover it. 00:26:39.771 [2024-10-08 18:33:57.835067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.771 [2024-10-08 18:33:57.835093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.771 qpair failed and we were unable to recover it. 00:26:39.771 [2024-10-08 18:33:57.835208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.771 [2024-10-08 18:33:57.835235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.771 qpair failed and we were unable to recover it. 
00:26:39.771 [2024-10-08 18:33:57.835333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.771 [2024-10-08 18:33:57.835359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.771 qpair failed and we were unable to recover it. 00:26:39.771 [2024-10-08 18:33:57.835443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.771 [2024-10-08 18:33:57.835469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.771 qpair failed and we were unable to recover it. 00:26:39.771 [2024-10-08 18:33:57.835559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.771 [2024-10-08 18:33:57.835586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.771 qpair failed and we were unable to recover it. 00:26:39.771 [2024-10-08 18:33:57.835674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.771 [2024-10-08 18:33:57.835701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.771 qpair failed and we were unable to recover it. 00:26:39.771 [2024-10-08 18:33:57.835787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.771 [2024-10-08 18:33:57.835814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.771 qpair failed and we were unable to recover it. 00:26:39.771 [2024-10-08 18:33:57.835928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.771 [2024-10-08 18:33:57.835955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.771 qpair failed and we were unable to recover it. 00:26:39.771 [2024-10-08 18:33:57.836071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.771 [2024-10-08 18:33:57.836097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.771 qpair failed and we were unable to recover it. 00:26:39.771 [2024-10-08 18:33:57.836186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.836213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 00:26:39.772 [2024-10-08 18:33:57.836340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.836367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 00:26:39.772 [2024-10-08 18:33:57.836452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.836479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 
00:26:39.772 [2024-10-08 18:33:57.836593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.836624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 00:26:39.772 [2024-10-08 18:33:57.836743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.836769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 00:26:39.772 [2024-10-08 18:33:57.836890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.836918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 00:26:39.772 [2024-10-08 18:33:57.837031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.837056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 00:26:39.772 [2024-10-08 18:33:57.837174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.837201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 00:26:39.772 [2024-10-08 18:33:57.837304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.837333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 00:26:39.772 [2024-10-08 18:33:57.837419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.837445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 00:26:39.772 [2024-10-08 18:33:57.837537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.837563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 00:26:39.772 [2024-10-08 18:33:57.837672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.837699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 00:26:39.772 [2024-10-08 18:33:57.837789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.837816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 
00:26:39.772 [2024-10-08 18:33:57.837937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.837964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 00:26:39.772 [2024-10-08 18:33:57.838070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.838097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 00:26:39.772 [2024-10-08 18:33:57.838183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.838209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 00:26:39.772 [2024-10-08 18:33:57.838319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.838348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 00:26:39.772 [2024-10-08 18:33:57.838469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.838496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 00:26:39.772 [2024-10-08 18:33:57.838579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.838605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 00:26:39.772 [2024-10-08 18:33:57.838715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.838742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 00:26:39.772 [2024-10-08 18:33:57.838826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.838851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 00:26:39.772 [2024-10-08 18:33:57.838935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.838962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 00:26:39.772 [2024-10-08 18:33:57.839048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.839075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 
00:26:39.772 [2024-10-08 18:33:57.839161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.839188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 00:26:39.772 [2024-10-08 18:33:57.839278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.839311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 00:26:39.772 [2024-10-08 18:33:57.839452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.839479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 00:26:39.772 [2024-10-08 18:33:57.839571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.839596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 00:26:39.772 [2024-10-08 18:33:57.839683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.839709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 00:26:39.772 [2024-10-08 18:33:57.839800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.839826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 00:26:39.772 [2024-10-08 18:33:57.839918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.839946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 00:26:39.772 [2024-10-08 18:33:57.840040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.840071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 00:26:39.772 [2024-10-08 18:33:57.840160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.840187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 00:26:39.772 [2024-10-08 18:33:57.840281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.840315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 
00:26:39.772 [2024-10-08 18:33:57.840403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.840429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.772 qpair failed and we were unable to recover it. 00:26:39.772 [2024-10-08 18:33:57.840518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.772 [2024-10-08 18:33:57.840544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 00:26:39.773 [2024-10-08 18:33:57.840639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.840666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 00:26:39.773 [2024-10-08 18:33:57.840783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.840809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 00:26:39.773 [2024-10-08 18:33:57.840891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.840917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 00:26:39.773 [2024-10-08 18:33:57.841005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.841030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 00:26:39.773 [2024-10-08 18:33:57.841139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.841166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 00:26:39.773 [2024-10-08 18:33:57.841253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.841278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 00:26:39.773 [2024-10-08 18:33:57.841384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.841411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 00:26:39.773 [2024-10-08 18:33:57.841507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.841533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 
00:26:39.773 [2024-10-08 18:33:57.841679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.841706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 00:26:39.773 [2024-10-08 18:33:57.841796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.841824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 00:26:39.773 [2024-10-08 18:33:57.841909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.841935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 00:26:39.773 [2024-10-08 18:33:57.842033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.842061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 00:26:39.773 [2024-10-08 18:33:57.842187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.842214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 00:26:39.773 [2024-10-08 18:33:57.842416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.842443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 00:26:39.773 [2024-10-08 18:33:57.842538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.842565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 00:26:39.773 [2024-10-08 18:33:57.842708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.842735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 00:26:39.773 [2024-10-08 18:33:57.842817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.842843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 00:26:39.773 [2024-10-08 18:33:57.842932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.842959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 
00:26:39.773 [2024-10-08 18:33:57.843052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.843079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 00:26:39.773 [2024-10-08 18:33:57.843166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.843192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 00:26:39.773 [2024-10-08 18:33:57.843295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.843323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 00:26:39.773 [2024-10-08 18:33:57.843409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.843436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 00:26:39.773 [2024-10-08 18:33:57.843521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.843554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 00:26:39.773 [2024-10-08 18:33:57.843639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.843666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 00:26:39.773 [2024-10-08 18:33:57.843748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.843776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 00:26:39.773 [2024-10-08 18:33:57.843876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.843903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 00:26:39.773 [2024-10-08 18:33:57.843990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.844017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 00:26:39.773 [2024-10-08 18:33:57.844116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.844156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 
00:26:39.773 [2024-10-08 18:33:57.844245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.844273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 00:26:39.773 [2024-10-08 18:33:57.844381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.844409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 00:26:39.773 [2024-10-08 18:33:57.844522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.844548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 00:26:39.773 [2024-10-08 18:33:57.844629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.844656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 00:26:39.773 [2024-10-08 18:33:57.844745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.844771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.773 qpair failed and we were unable to recover it. 00:26:39.773 [2024-10-08 18:33:57.844857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.773 [2024-10-08 18:33:57.844885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 00:26:39.774 [2024-10-08 18:33:57.844982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.845008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 00:26:39.774 [2024-10-08 18:33:57.845098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.845124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 00:26:39.774 [2024-10-08 18:33:57.845220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.845247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 00:26:39.774 [2024-10-08 18:33:57.845343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.845371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 
00:26:39.774 [2024-10-08 18:33:57.845452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.845479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 00:26:39.774 [2024-10-08 18:33:57.845562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.845588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 00:26:39.774 [2024-10-08 18:33:57.845704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.845732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 00:26:39.774 [2024-10-08 18:33:57.845826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.845852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 00:26:39.774 [2024-10-08 18:33:57.845938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.845967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 00:26:39.774 [2024-10-08 18:33:57.846080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.846106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 00:26:39.774 [2024-10-08 18:33:57.846197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.846224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 00:26:39.774 [2024-10-08 18:33:57.846333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.846361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 00:26:39.774 [2024-10-08 18:33:57.846448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.846474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 00:26:39.774 [2024-10-08 18:33:57.846564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.846590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 
00:26:39.774 [2024-10-08 18:33:57.846679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.846705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 00:26:39.774 [2024-10-08 18:33:57.846800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.846826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 00:26:39.774 [2024-10-08 18:33:57.846922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.846948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 00:26:39.774 [2024-10-08 18:33:57.847042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.847069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 00:26:39.774 [2024-10-08 18:33:57.847148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.847175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 00:26:39.774 [2024-10-08 18:33:57.847376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.847403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 00:26:39.774 [2024-10-08 18:33:57.847488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.847515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 00:26:39.774 [2024-10-08 18:33:57.847632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.847659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 00:26:39.774 [2024-10-08 18:33:57.847747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.847774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 00:26:39.774 [2024-10-08 18:33:57.847887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.847915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 
00:26:39.774 [2024-10-08 18:33:57.847996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.848022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 00:26:39.774 [2024-10-08 18:33:57.848135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.848161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 00:26:39.774 [2024-10-08 18:33:57.848267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.848298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 00:26:39.774 [2024-10-08 18:33:57.848381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.848408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 00:26:39.774 [2024-10-08 18:33:57.848501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.848527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 00:26:39.774 [2024-10-08 18:33:57.848613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.848640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 00:26:39.774 [2024-10-08 18:33:57.848733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.848759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 00:26:39.774 [2024-10-08 18:33:57.848877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.848903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 00:26:39.774 [2024-10-08 18:33:57.848995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.849023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 00:26:39.774 [2024-10-08 18:33:57.849160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.849187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 
00:26:39.774 [2024-10-08 18:33:57.849270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.849303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 00:26:39.774 [2024-10-08 18:33:57.849392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.774 [2024-10-08 18:33:57.849419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.774 qpair failed and we were unable to recover it. 00:26:39.774 [2024-10-08 18:33:57.849510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.849537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 00:26:39.775 [2024-10-08 18:33:57.849739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.849766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 00:26:39.775 [2024-10-08 18:33:57.849910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.849937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 00:26:39.775 [2024-10-08 18:33:57.850130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.850157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 00:26:39.775 [2024-10-08 18:33:57.850245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.850272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 00:26:39.775 [2024-10-08 18:33:57.850357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.850386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 00:26:39.775 [2024-10-08 18:33:57.850512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.850538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 00:26:39.775 [2024-10-08 18:33:57.850648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.850676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 
00:26:39.775 [2024-10-08 18:33:57.850764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.850790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 00:26:39.775 [2024-10-08 18:33:57.850884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.850910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 00:26:39.775 [2024-10-08 18:33:57.851029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.851055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 00:26:39.775 [2024-10-08 18:33:57.851145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.851172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 00:26:39.775 [2024-10-08 18:33:57.851253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.851280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 00:26:39.775 [2024-10-08 18:33:57.851401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.851428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 00:26:39.775 [2024-10-08 18:33:57.851540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.851566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 00:26:39.775 [2024-10-08 18:33:57.851648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.851675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 00:26:39.775 [2024-10-08 18:33:57.851793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.851820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 00:26:39.775 [2024-10-08 18:33:57.851915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.851942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 
00:26:39.775 [2024-10-08 18:33:57.852031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.852057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 00:26:39.775 [2024-10-08 18:33:57.852167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.852198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 00:26:39.775 [2024-10-08 18:33:57.852296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.852322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 00:26:39.775 [2024-10-08 18:33:57.852415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.852442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 00:26:39.775 [2024-10-08 18:33:57.852574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.852600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 00:26:39.775 [2024-10-08 18:33:57.852715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.852741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 00:26:39.775 [2024-10-08 18:33:57.852857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.852884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 00:26:39.775 [2024-10-08 18:33:57.852990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.853017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 00:26:39.775 [2024-10-08 18:33:57.853107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.853134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 00:26:39.775 [2024-10-08 18:33:57.853222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.853248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 
00:26:39.775 [2024-10-08 18:33:57.853369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.853396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 00:26:39.775 [2024-10-08 18:33:57.853482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.853510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 00:26:39.775 [2024-10-08 18:33:57.853596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.853622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 00:26:39.775 [2024-10-08 18:33:57.853715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.853742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 00:26:39.775 [2024-10-08 18:33:57.853827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.853853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 00:26:39.775 [2024-10-08 18:33:57.853933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.775 [2024-10-08 18:33:57.853960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.775 qpair failed and we were unable to recover it. 00:26:39.775 [2024-10-08 18:33:57.854046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.854072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 00:26:39.776 [2024-10-08 18:33:57.854189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.854216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 00:26:39.776 [2024-10-08 18:33:57.854332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.854358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 00:26:39.776 [2024-10-08 18:33:57.854451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.854478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 
00:26:39.776 [2024-10-08 18:33:57.854568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.854594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 00:26:39.776 [2024-10-08 18:33:57.854710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.854737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 00:26:39.776 [2024-10-08 18:33:57.854834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.854860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 00:26:39.776 [2024-10-08 18:33:57.854954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.854982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 00:26:39.776 [2024-10-08 18:33:57.855103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.855130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 00:26:39.776 [2024-10-08 18:33:57.855226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.855252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 00:26:39.776 [2024-10-08 18:33:57.855367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.855395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 00:26:39.776 [2024-10-08 18:33:57.855507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.855533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 00:26:39.776 [2024-10-08 18:33:57.855647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.855678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 00:26:39.776 [2024-10-08 18:33:57.855805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.855833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 
00:26:39.776 [2024-10-08 18:33:57.855913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.855941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 00:26:39.776 [2024-10-08 18:33:57.856057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.856084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 00:26:39.776 [2024-10-08 18:33:57.856174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.856200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 00:26:39.776 [2024-10-08 18:33:57.856285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.856317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 00:26:39.776 [2024-10-08 18:33:57.856399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.856425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 00:26:39.776 [2024-10-08 18:33:57.856540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.856568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 00:26:39.776 [2024-10-08 18:33:57.856649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.856675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 00:26:39.776 [2024-10-08 18:33:57.856758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.856785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 00:26:39.776 [2024-10-08 18:33:57.856867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.856894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 00:26:39.776 [2024-10-08 18:33:57.856977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.857003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 
00:26:39.776 [2024-10-08 18:33:57.857113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.857139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 00:26:39.776 [2024-10-08 18:33:57.857256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.857282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 00:26:39.776 [2024-10-08 18:33:57.857384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.857412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 00:26:39.776 [2024-10-08 18:33:57.857554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.857580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 00:26:39.776 [2024-10-08 18:33:57.857658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.857685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 00:26:39.776 [2024-10-08 18:33:57.857766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.857793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 00:26:39.776 [2024-10-08 18:33:57.857877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.857903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 00:26:39.776 [2024-10-08 18:33:57.857990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.858018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 00:26:39.776 [2024-10-08 18:33:57.858102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.858128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 00:26:39.776 [2024-10-08 18:33:57.858216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.858242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 
00:26:39.776 [2024-10-08 18:33:57.858340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.858367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 00:26:39.776 [2024-10-08 18:33:57.858456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.858483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 00:26:39.776 [2024-10-08 18:33:57.858567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.858593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.776 qpair failed and we were unable to recover it. 00:26:39.776 [2024-10-08 18:33:57.858678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.776 [2024-10-08 18:33:57.858705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.777 qpair failed and we were unable to recover it. 00:26:39.777 [2024-10-08 18:33:57.858814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.777 [2024-10-08 18:33:57.858840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.777 qpair failed and we were unable to recover it. 00:26:39.777 [2024-10-08 18:33:57.858929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.777 [2024-10-08 18:33:57.858960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.777 qpair failed and we were unable to recover it. 00:26:39.777 [2024-10-08 18:33:57.859065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.777 [2024-10-08 18:33:57.859091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.777 qpair failed and we were unable to recover it. 00:26:39.777 [2024-10-08 18:33:57.859184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.777 [2024-10-08 18:33:57.859211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.777 qpair failed and we were unable to recover it. 00:26:39.777 [2024-10-08 18:33:57.859323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.777 [2024-10-08 18:33:57.859351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.777 qpair failed and we were unable to recover it. 00:26:39.777 [2024-10-08 18:33:57.859438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.777 [2024-10-08 18:33:57.859466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.777 qpair failed and we were unable to recover it. 
00:26:39.777 [2024-10-08 18:33:57.859550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.777 [2024-10-08 18:33:57.859576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.777 qpair failed and we were unable to recover it. 00:26:39.777 [2024-10-08 18:33:57.859668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.777 [2024-10-08 18:33:57.859694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.777 qpair failed and we were unable to recover it. 00:26:39.777 [2024-10-08 18:33:57.859750] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:39.777 [2024-10-08 18:33:57.859785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.777 [2024-10-08 18:33:57.859810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.777 qpair failed and we were unable to recover it. 00:26:39.777 [2024-10-08 18:33:57.859900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.777 [2024-10-08 18:33:57.859926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.777 qpair failed and we were unable to recover it. 00:26:39.777 [2024-10-08 18:33:57.860038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.777 [2024-10-08 18:33:57.860065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.777 qpair failed and we were unable to recover it. 00:26:39.777 [2024-10-08 18:33:57.860146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.777 [2024-10-08 18:33:57.860173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.777 qpair failed and we were unable to recover it. 00:26:39.777 [2024-10-08 18:33:57.860294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.777 [2024-10-08 18:33:57.860322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.777 qpair failed and we were unable to recover it. 00:26:39.777 [2024-10-08 18:33:57.860399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.777 [2024-10-08 18:33:57.860426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.777 qpair failed and we were unable to recover it. 00:26:39.777 [2024-10-08 18:33:57.860546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.777 [2024-10-08 18:33:57.860577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.777 qpair failed and we were unable to recover it. 00:26:39.777 [2024-10-08 18:33:57.860661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.777 [2024-10-08 18:33:57.860687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.777 qpair failed and we were unable to recover it. 
00:26:39.777 [2024-10-08 18:33:57.860800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.777 [2024-10-08 18:33:57.860828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.777 qpair failed and we were unable to recover it. 00:26:39.777 [2024-10-08 18:33:57.860909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.777 [2024-10-08 18:33:57.860936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.777 qpair failed and we were unable to recover it. 00:26:39.777 [2024-10-08 18:33:57.861032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.777 [2024-10-08 18:33:57.861057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.777 qpair failed and we were unable to recover it. 00:26:39.777 [2024-10-08 18:33:57.861174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.777 [2024-10-08 18:33:57.861202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.777 qpair failed and we were unable to recover it. 00:26:39.777 [2024-10-08 18:33:57.861296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.777 [2024-10-08 18:33:57.861326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.777 qpair failed and we were unable to recover it. 00:26:39.777 [2024-10-08 18:33:57.861439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.777 [2024-10-08 18:33:57.861465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.777 qpair failed and we were unable to recover it. 00:26:39.777 [2024-10-08 18:33:57.861578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.777 [2024-10-08 18:33:57.861604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.777 qpair failed and we were unable to recover it. 00:26:39.777 [2024-10-08 18:33:57.861693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.777 [2024-10-08 18:33:57.861720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.777 qpair failed and we were unable to recover it. 00:26:39.777 [2024-10-08 18:33:57.861807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.777 [2024-10-08 18:33:57.861833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.777 qpair failed and we were unable to recover it. 00:26:39.777 [2024-10-08 18:33:57.861932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.777 [2024-10-08 18:33:57.861959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.777 qpair failed and we were unable to recover it. 
00:26:39.777 [2024-10-08 18:33:57.862075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.777 [2024-10-08 18:33:57.862101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.777 qpair failed and we were unable to recover it. 00:26:39.777 [2024-10-08 18:33:57.862188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.777 [2024-10-08 18:33:57.862215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.777 qpair failed and we were unable to recover it. 00:26:39.777 [2024-10-08 18:33:57.862337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.777 [2024-10-08 18:33:57.862365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.777 qpair failed and we were unable to recover it. 00:26:39.777 [2024-10-08 18:33:57.862454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.777 [2024-10-08 18:33:57.862480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.777 qpair failed and we were unable to recover it. 00:26:39.777 [2024-10-08 18:33:57.862591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.777 [2024-10-08 18:33:57.862618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.777 qpair failed and we were unable to recover it. 00:26:39.777 [2024-10-08 18:33:57.862737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.777 [2024-10-08 18:33:57.862764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.777 qpair failed and we were unable to recover it. 00:26:39.778 [2024-10-08 18:33:57.862849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.778 [2024-10-08 18:33:57.862877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.778 qpair failed and we were unable to recover it. 00:26:39.778 [2024-10-08 18:33:57.862971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.778 [2024-10-08 18:33:57.862997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.778 qpair failed and we were unable to recover it. 00:26:39.778 [2024-10-08 18:33:57.863113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.778 [2024-10-08 18:33:57.863140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.778 qpair failed and we were unable to recover it. 00:26:39.778 [2024-10-08 18:33:57.863235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.778 [2024-10-08 18:33:57.863261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.778 qpair failed and we were unable to recover it. 
00:26:39.778 [2024-10-08 18:33:57.863382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.778 [2024-10-08 18:33:57.863409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.778 qpair failed and we were unable to recover it. 00:26:39.778 [2024-10-08 18:33:57.863495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.778 [2024-10-08 18:33:57.863521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.778 qpair failed and we were unable to recover it. 00:26:39.778 [2024-10-08 18:33:57.863638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.778 [2024-10-08 18:33:57.863664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.778 qpair failed and we were unable to recover it. 00:26:39.778 [2024-10-08 18:33:57.863752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.778 [2024-10-08 18:33:57.863779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.778 qpair failed and we were unable to recover it. 00:26:39.778 [2024-10-08 18:33:57.863862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.778 [2024-10-08 18:33:57.863890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.778 qpair failed and we were unable to recover it. 00:26:39.778 [2024-10-08 18:33:57.863973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.778 [2024-10-08 18:33:57.864006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.778 qpair failed and we were unable to recover it. 00:26:39.778 [2024-10-08 18:33:57.864085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.778 [2024-10-08 18:33:57.864112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.778 qpair failed and we were unable to recover it. 00:26:39.778 [2024-10-08 18:33:57.864201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.778 [2024-10-08 18:33:57.864228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.778 qpair failed and we were unable to recover it. 00:26:39.778 [2024-10-08 18:33:57.864327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.778 [2024-10-08 18:33:57.864355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.778 qpair failed and we were unable to recover it. 00:26:39.778 [2024-10-08 18:33:57.864442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.778 [2024-10-08 18:33:57.864469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.778 qpair failed and we were unable to recover it. 
00:26:39.783 [2024-10-08 18:33:57.890170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.783 [2024-10-08 18:33:57.890196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.783 qpair failed and we were unable to recover it. 00:26:39.783 [2024-10-08 18:33:57.890277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.783 [2024-10-08 18:33:57.890317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.783 qpair failed and we were unable to recover it. 00:26:39.783 [2024-10-08 18:33:57.890407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.783 [2024-10-08 18:33:57.890434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.783 qpair failed and we were unable to recover it. 00:26:39.783 [2024-10-08 18:33:57.890518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.783 [2024-10-08 18:33:57.890545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.783 qpair failed and we were unable to recover it. 00:26:39.783 [2024-10-08 18:33:57.890663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.783 [2024-10-08 18:33:57.890690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.783 qpair failed and we were unable to recover it. 00:26:39.783 [2024-10-08 18:33:57.890779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.783 [2024-10-08 18:33:57.890805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.783 qpair failed and we were unable to recover it. 00:26:39.783 [2024-10-08 18:33:57.890892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.783 [2024-10-08 18:33:57.890918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.783 qpair failed and we were unable to recover it. 00:26:39.783 [2024-10-08 18:33:57.890999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.783 [2024-10-08 18:33:57.891025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.783 qpair failed and we were unable to recover it. 00:26:39.783 [2024-10-08 18:33:57.891117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.783 [2024-10-08 18:33:57.891144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.783 qpair failed and we were unable to recover it. 00:26:39.783 [2024-10-08 18:33:57.891235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.783 [2024-10-08 18:33:57.891261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.783 qpair failed and we were unable to recover it. 
00:26:39.783 [2024-10-08 18:33:57.891355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.783 [2024-10-08 18:33:57.891382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.783 qpair failed and we were unable to recover it. 00:26:39.783 [2024-10-08 18:33:57.891528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.783 [2024-10-08 18:33:57.891555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.783 qpair failed and we were unable to recover it. 00:26:39.783 [2024-10-08 18:33:57.891674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.783 [2024-10-08 18:33:57.891701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.783 qpair failed and we were unable to recover it. 00:26:39.783 [2024-10-08 18:33:57.891795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.783 [2024-10-08 18:33:57.891821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.783 qpair failed and we were unable to recover it. 00:26:39.783 [2024-10-08 18:33:57.891935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.783 [2024-10-08 18:33:57.891962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.783 qpair failed and we were unable to recover it. 00:26:39.783 [2024-10-08 18:33:57.892044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.783 [2024-10-08 18:33:57.892070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.783 qpair failed and we were unable to recover it. 00:26:39.783 [2024-10-08 18:33:57.892159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.783 [2024-10-08 18:33:57.892186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.783 qpair failed and we were unable to recover it. 00:26:39.783 [2024-10-08 18:33:57.892272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.783 [2024-10-08 18:33:57.892308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.783 qpair failed and we were unable to recover it. 00:26:39.783 [2024-10-08 18:33:57.892423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.892450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.892569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.892595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 
00:26:39.784 [2024-10-08 18:33:57.892687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.892713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.892802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.892828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.892942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.892969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.893061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.893087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.893204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.893231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.893324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.893352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.893442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.893468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.893581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.893607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.893722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.893749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.893865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.893891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 
00:26:39.784 [2024-10-08 18:33:57.894001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.894027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.894133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.894160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.894254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.894281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.894402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.894428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.894515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.894541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.894656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.894682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.894793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.894819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.894943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.894975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.895062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.895088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.895205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.895231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 
00:26:39.784 [2024-10-08 18:33:57.895318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.895346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.895448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.895488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.895611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.895639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.895734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.895761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.895899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.895926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.896044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.896070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.896153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.896180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.896298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.896325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.896407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.896434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.896523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.896550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 
00:26:39.784 [2024-10-08 18:33:57.896664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.896691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.896807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.896834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.896976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.897003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.897120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.897147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.897232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.897258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.897384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.897412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.897491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.897517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.784 [2024-10-08 18:33:57.897593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.784 [2024-10-08 18:33:57.897619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.784 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.897709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.897736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.897816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.897845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 
00:26:39.785 [2024-10-08 18:33:57.897924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.897950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.898039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.898067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.898147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.898173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.898284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.898328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.898418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.898449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.898531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.898557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.898647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.898675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.898784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.898810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.898929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.898956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.899057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.899083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 
00:26:39.785 [2024-10-08 18:33:57.899176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.899203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.899310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.899338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.899456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.899482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.899570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.899598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.899715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.899742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.899830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.899857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.899949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.899976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.900095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.900123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.900247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.900273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.900396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.900422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 
00:26:39.785 [2024-10-08 18:33:57.900533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.900559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.900642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.900669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.900764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.900790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.900877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.900904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.900999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.901026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.901117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.901144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.901315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.901342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.901424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.901450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.901589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.901616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.901725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.901751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 
00:26:39.785 [2024-10-08 18:33:57.901859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.901886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.901975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.902002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.902089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.902115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.902231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.902257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.902381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.902408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.902486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.902513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.785 qpair failed and we were unable to recover it. 00:26:39.785 [2024-10-08 18:33:57.902600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.785 [2024-10-08 18:33:57.902628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 00:26:39.786 [2024-10-08 18:33:57.902732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.902759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 00:26:39.786 [2024-10-08 18:33:57.902855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.902881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 00:26:39.786 [2024-10-08 18:33:57.902990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.903017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 
00:26:39.786 [2024-10-08 18:33:57.903123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.903150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 00:26:39.786 [2024-10-08 18:33:57.903232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.903260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 00:26:39.786 [2024-10-08 18:33:57.903349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.903376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 00:26:39.786 [2024-10-08 18:33:57.903491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.903519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 00:26:39.786 [2024-10-08 18:33:57.903600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.903632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 00:26:39.786 [2024-10-08 18:33:57.903717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.903742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 00:26:39.786 [2024-10-08 18:33:57.903862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.903888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 00:26:39.786 [2024-10-08 18:33:57.903998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.904024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 00:26:39.786 [2024-10-08 18:33:57.904113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.904139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 00:26:39.786 [2024-10-08 18:33:57.904225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.904251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 
00:26:39.786 [2024-10-08 18:33:57.904386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.904427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 00:26:39.786 [2024-10-08 18:33:57.904518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.904546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 00:26:39.786 [2024-10-08 18:33:57.904665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.904693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 00:26:39.786 [2024-10-08 18:33:57.904810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.904837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 00:26:39.786 [2024-10-08 18:33:57.904951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.904977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 00:26:39.786 [2024-10-08 18:33:57.905076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.905104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 00:26:39.786 [2024-10-08 18:33:57.905195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.905223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 00:26:39.786 [2024-10-08 18:33:57.905337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.905364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 00:26:39.786 [2024-10-08 18:33:57.905453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.905480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 00:26:39.786 [2024-10-08 18:33:57.905565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.905592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 
00:26:39.786 [2024-10-08 18:33:57.905677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.905703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 00:26:39.786 [2024-10-08 18:33:57.905796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.905824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 00:26:39.786 [2024-10-08 18:33:57.905912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.905939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 00:26:39.786 [2024-10-08 18:33:57.906025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.906051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 00:26:39.786 [2024-10-08 18:33:57.906143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.906169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 00:26:39.786 [2024-10-08 18:33:57.906256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.906282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 00:26:39.786 [2024-10-08 18:33:57.906385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.906413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 00:26:39.786 [2024-10-08 18:33:57.906524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.906551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 00:26:39.786 [2024-10-08 18:33:57.906664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.906691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 00:26:39.786 [2024-10-08 18:33:57.906802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.906830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it. 
00:26:39.786 [2024-10-08 18:33:57.906969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.786 [2024-10-08 18:33:57.906995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.786 qpair failed and we were unable to recover it.
00:26:39.786 [2024-10-08 18:33:57.907092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.787 [2024-10-08 18:33:57.907132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.787 qpair failed and we were unable to recover it.
00:26:39.792 [identical connect() failed, errno = 111 / qpair failed pairs repeat continuously for tqpair=0x1ff7060 and tqpair=0x7f8174000b90, addr=10.0.0.2, port=4420, through 2024-10-08 18:33:57.933128]
00:26:39.792 [2024-10-08 18:33:57.933209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.792 [2024-10-08 18:33:57.933235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.792 qpair failed and we were unable to recover it. 00:26:39.792 [2024-10-08 18:33:57.933354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.792 [2024-10-08 18:33:57.933395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.792 qpair failed and we were unable to recover it. 00:26:39.792 [2024-10-08 18:33:57.933542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.792 [2024-10-08 18:33:57.933571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.792 qpair failed and we were unable to recover it. 00:26:39.792 [2024-10-08 18:33:57.933668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.792 [2024-10-08 18:33:57.933695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.792 qpair failed and we were unable to recover it. 00:26:39.792 [2024-10-08 18:33:57.933783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.792 [2024-10-08 18:33:57.933810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.792 qpair failed and we were unable to recover it. 00:26:39.792 [2024-10-08 18:33:57.933899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.792 [2024-10-08 18:33:57.933926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.792 qpair failed and we were unable to recover it. 00:26:39.792 [2024-10-08 18:33:57.934019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.792 [2024-10-08 18:33:57.934052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.792 qpair failed and we were unable to recover it. 00:26:39.792 [2024-10-08 18:33:57.934143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.792 [2024-10-08 18:33:57.934170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.792 qpair failed and we were unable to recover it. 00:26:39.792 [2024-10-08 18:33:57.934253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.792 [2024-10-08 18:33:57.934280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.792 qpair failed and we were unable to recover it. 00:26:39.792 [2024-10-08 18:33:57.934384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.792 [2024-10-08 18:33:57.934411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.792 qpair failed and we were unable to recover it. 
00:26:39.792 [2024-10-08 18:33:57.934524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.792 [2024-10-08 18:33:57.934550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.792 qpair failed and we were unable to recover it. 00:26:39.792 [2024-10-08 18:33:57.934660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.792 [2024-10-08 18:33:57.934686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.792 qpair failed and we were unable to recover it. 00:26:39.792 [2024-10-08 18:33:57.934805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.792 [2024-10-08 18:33:57.934832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.792 qpair failed and we were unable to recover it. 00:26:39.792 [2024-10-08 18:33:57.934929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.792 [2024-10-08 18:33:57.934958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.792 qpair failed and we were unable to recover it. 00:26:39.792 [2024-10-08 18:33:57.935036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.792 [2024-10-08 18:33:57.935063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.792 qpair failed and we were unable to recover it. 00:26:39.792 [2024-10-08 18:33:57.935156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.792 [2024-10-08 18:33:57.935184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.792 qpair failed and we were unable to recover it. 00:26:39.792 [2024-10-08 18:33:57.935279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.792 [2024-10-08 18:33:57.935312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.792 qpair failed and we were unable to recover it. 00:26:39.792 [2024-10-08 18:33:57.935401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.792 [2024-10-08 18:33:57.935428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.792 qpair failed and we were unable to recover it. 00:26:39.792 [2024-10-08 18:33:57.935517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.792 [2024-10-08 18:33:57.935545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.792 qpair failed and we were unable to recover it. 00:26:39.792 [2024-10-08 18:33:57.935660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.792 [2024-10-08 18:33:57.935687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.792 qpair failed and we were unable to recover it. 
00:26:39.792 [2024-10-08 18:33:57.935782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.792 [2024-10-08 18:33:57.935809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.792 qpair failed and we were unable to recover it. 00:26:39.792 [2024-10-08 18:33:57.935903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.792 [2024-10-08 18:33:57.935930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.792 qpair failed and we were unable to recover it. 00:26:39.792 [2024-10-08 18:33:57.936018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.792 [2024-10-08 18:33:57.936044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.792 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.936155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.936183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.936280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.936313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.936431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.936457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.936542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.936568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.936654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.936681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.936773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.936802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.936918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.936945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 
00:26:39.793 [2024-10-08 18:33:57.937040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.937066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.937160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.937187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.937292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.937325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.937419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.937452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.937542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.937568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.937656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.937683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.937803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.937830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.937925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.937952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.938034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.938061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.938156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.938183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 
00:26:39.793 [2024-10-08 18:33:57.938267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.938312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.938414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.938441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.938532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.938559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.938650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.938678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.938770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.938798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.938891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.938919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.939039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.939067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.939189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.939216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.939313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.939342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.939431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.939458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 
00:26:39.793 [2024-10-08 18:33:57.939568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.939595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.939686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.939713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.939794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.939820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.939906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.939933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.940050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.940076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.940165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.940193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.940306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.940333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.940421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.940447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.940536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.940563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.940649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.940676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 
00:26:39.793 [2024-10-08 18:33:57.940788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.793 [2024-10-08 18:33:57.940820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.793 qpair failed and we were unable to recover it. 00:26:39.793 [2024-10-08 18:33:57.940900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.940926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 00:26:39.794 [2024-10-08 18:33:57.941042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.941069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 00:26:39.794 [2024-10-08 18:33:57.941153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.941180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 00:26:39.794 [2024-10-08 18:33:57.941267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.941303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 00:26:39.794 [2024-10-08 18:33:57.941415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.941443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 00:26:39.794 [2024-10-08 18:33:57.941540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.941567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 00:26:39.794 [2024-10-08 18:33:57.941682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.941709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 00:26:39.794 [2024-10-08 18:33:57.941800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.941827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 00:26:39.794 [2024-10-08 18:33:57.941909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.941936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 
00:26:39.794 [2024-10-08 18:33:57.942019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.942046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 00:26:39.794 [2024-10-08 18:33:57.942128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.942155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 00:26:39.794 [2024-10-08 18:33:57.942253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.942280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 00:26:39.794 [2024-10-08 18:33:57.942381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.942408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 00:26:39.794 [2024-10-08 18:33:57.942526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.942553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 00:26:39.794 [2024-10-08 18:33:57.942655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.942682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 00:26:39.794 [2024-10-08 18:33:57.942772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.942799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 00:26:39.794 [2024-10-08 18:33:57.942914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.942941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 00:26:39.794 [2024-10-08 18:33:57.943050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.943077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 00:26:39.794 [2024-10-08 18:33:57.943168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.943196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 
00:26:39.794 [2024-10-08 18:33:57.943284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.943317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 00:26:39.794 [2024-10-08 18:33:57.943412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.943439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 00:26:39.794 [2024-10-08 18:33:57.943521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.943547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 00:26:39.794 [2024-10-08 18:33:57.943639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.943666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 00:26:39.794 [2024-10-08 18:33:57.943755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.943782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 00:26:39.794 [2024-10-08 18:33:57.943896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.943922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 00:26:39.794 [2024-10-08 18:33:57.944007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.944035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 00:26:39.794 [2024-10-08 18:33:57.944184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.944211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 00:26:39.794 [2024-10-08 18:33:57.944326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.944354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 00:26:39.794 [2024-10-08 18:33:57.944448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.944476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 
00:26:39.794 [2024-10-08 18:33:57.944573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.944600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 00:26:39.794 [2024-10-08 18:33:57.944683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.944710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 00:26:39.794 [2024-10-08 18:33:57.944829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.944856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 00:26:39.794 [2024-10-08 18:33:57.944983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.945010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 00:26:39.794 [2024-10-08 18:33:57.945099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.945126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 00:26:39.794 [2024-10-08 18:33:57.945213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.945239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 00:26:39.794 [2024-10-08 18:33:57.945344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.794 [2024-10-08 18:33:57.945372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.794 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.945484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.945511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.945604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.945631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.945721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.945748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 
00:26:39.795 [2024-10-08 18:33:57.945850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.945889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.946021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.946049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.946148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.946176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.946266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.946302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.946391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.946418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.946529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.946555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.946701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.946728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.946815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.946841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.946944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.946971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.947052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.947080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 
00:26:39.795 [2024-10-08 18:33:57.947171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.947198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.947310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.947341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.947432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.947459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.947546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.947573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.947674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.947701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.947815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.947843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.947930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.947956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.948068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.948095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.948183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.948210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.948302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.948332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 
00:26:39.795 [2024-10-08 18:33:57.948428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.948456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.948541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.948568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.948654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.948680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.948798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.948825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.948946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.948974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.949068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.949095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.949189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.949215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.949313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.949341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.949430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.949456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.949549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.949577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 
00:26:39.795 [2024-10-08 18:33:57.949662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.949689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.949807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.949833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.949928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.949957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.950048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.950075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.950162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.950188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.950303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.795 [2024-10-08 18:33:57.950330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.795 qpair failed and we were unable to recover it. 00:26:39.795 [2024-10-08 18:33:57.950421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.950447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.950539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.950566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.950682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.950709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.950806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.950835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 
00:26:39.796 [2024-10-08 18:33:57.950921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.950948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.951046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.951074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.951158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.951185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.951301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.951329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.951415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.951441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.951526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.951554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.951643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.951670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.951772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.951799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.951910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.951938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.952028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.952055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 
00:26:39.796 [2024-10-08 18:33:57.952168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.952195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.952312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.952341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.952462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.952489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.952584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.952611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.952722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.952749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.952849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.952876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.952962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.952988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.953107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.953136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.953227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.953253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.953376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.953404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 
00:26:39.796 [2024-10-08 18:33:57.953491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.953517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.953634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.953660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.953752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.953778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.953870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.953898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.953991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.954018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.954127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.954154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.954244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.954270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.954358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.954385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.954507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.954535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.954648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.954674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 
00:26:39.796 [2024-10-08 18:33:57.954763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.954790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.954910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.954937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.955028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.955056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.955172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.955198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.955344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.955372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.955487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.796 [2024-10-08 18:33:57.955514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.796 qpair failed and we were unable to recover it. 00:26:39.796 [2024-10-08 18:33:57.955601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.955627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.955732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.955759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.955847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.955873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.955978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.956004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 
00:26:39.797 [2024-10-08 18:33:57.956113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.956146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.956230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.956261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.956359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.956386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.956473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.956500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.956593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.956619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.956697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.956723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.956865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.956891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.956977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.957004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.957085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.957111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.957232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.957258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 
00:26:39.797 [2024-10-08 18:33:57.957364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.957394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.957485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.957513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.957609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.957637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.957747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.957785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.957868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.957895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.958023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.958058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.958171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.958198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.958300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.958328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.958418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.958446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.958560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.958587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 
00:26:39.797 [2024-10-08 18:33:57.958699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.958726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.958812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.958839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.958935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.958962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.959047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.959074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.959164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.959191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.959316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.959346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.959437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.959462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.959554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.959581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.959676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.959708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.959787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.959814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 
00:26:39.797 [2024-10-08 18:33:57.959908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.959934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.960046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.960073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.960168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.960194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.960297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.960324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.960440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.960465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.960551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.797 [2024-10-08 18:33:57.960578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.797 qpair failed and we were unable to recover it. 00:26:39.797 [2024-10-08 18:33:57.960665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.960691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.960806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.960835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.960927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.960954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.961070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.961097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 
00:26:39.798 [2024-10-08 18:33:57.961179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.961206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.961297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.961324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.961428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.961456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.961544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.961571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.961657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.961684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.961798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.961825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.961939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.961966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.962047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.962074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.962158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.962185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.962310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.962339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 
00:26:39.798 [2024-10-08 18:33:57.962429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.962456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.962543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.962569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.962693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.962720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.962818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.962844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.962928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.962955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.963034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.963066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.963153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.963180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.963272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.963313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.963426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.963453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.963562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.963589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 
00:26:39.798 [2024-10-08 18:33:57.963677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.963704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.963797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.963826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.963916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.963943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.964054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.964081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.964167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.964194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.964273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.964312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.964428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.964454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.964542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.964570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.964662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.964689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.964810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.964837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 
00:26:39.798 [2024-10-08 18:33:57.964917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.964944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.965056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.965082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.965177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.965204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.965300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.965330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.965413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.965440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.798 qpair failed and we were unable to recover it. 00:26:39.798 [2024-10-08 18:33:57.965554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.798 [2024-10-08 18:33:57.965582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 00:26:39.799 [2024-10-08 18:33:57.965667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.965694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 00:26:39.799 [2024-10-08 18:33:57.965787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.965814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 00:26:39.799 [2024-10-08 18:33:57.965926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.965953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 00:26:39.799 [2024-10-08 18:33:57.966048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.966075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 
00:26:39.799 [2024-10-08 18:33:57.966154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.966181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 00:26:39.799 [2024-10-08 18:33:57.966335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.966363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 00:26:39.799 [2024-10-08 18:33:57.966445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.966481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 00:26:39.799 [2024-10-08 18:33:57.966562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.966590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 00:26:39.799 [2024-10-08 18:33:57.966682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.966709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 00:26:39.799 [2024-10-08 18:33:57.966788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.966815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 00:26:39.799 [2024-10-08 18:33:57.966905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.966932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 00:26:39.799 [2024-10-08 18:33:57.967010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.967037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 00:26:39.799 [2024-10-08 18:33:57.967121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.967147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 00:26:39.799 [2024-10-08 18:33:57.967235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.967262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 
00:26:39.799 [2024-10-08 18:33:57.967392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.967419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 00:26:39.799 [2024-10-08 18:33:57.967501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.967528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 00:26:39.799 [2024-10-08 18:33:57.967672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.967699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 00:26:39.799 [2024-10-08 18:33:57.967816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.967844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 00:26:39.799 [2024-10-08 18:33:57.967935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.967962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 00:26:39.799 [2024-10-08 18:33:57.968049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.968077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 00:26:39.799 [2024-10-08 18:33:57.968204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.968232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 00:26:39.799 [2024-10-08 18:33:57.968337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.968366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 00:26:39.799 [2024-10-08 18:33:57.968449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.968476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 00:26:39.799 [2024-10-08 18:33:57.968559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.968587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 
00:26:39.799 [2024-10-08 18:33:57.968672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.968699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 00:26:39.799 [2024-10-08 18:33:57.968784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.968811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 00:26:39.799 [2024-10-08 18:33:57.968895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.968922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 00:26:39.799 [2024-10-08 18:33:57.968998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.969025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 00:26:39.799 [2024-10-08 18:33:57.969161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.969188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 00:26:39.799 [2024-10-08 18:33:57.969265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.969306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 00:26:39.799 [2024-10-08 18:33:57.969397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.969423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 00:26:39.799 [2024-10-08 18:33:57.969507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.969534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 00:26:39.799 [2024-10-08 18:33:57.969670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.969697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 00:26:39.799 [2024-10-08 18:33:57.969784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.969817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 
00:26:39.799 [2024-10-08 18:33:57.969905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.799 [2024-10-08 18:33:57.969932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.799 qpair failed and we were unable to recover it. 00:26:39.799 [2024-10-08 18:33:57.970024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.970051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.970157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.970196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.970323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.970353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.970471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.970498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.970628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.970656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.970764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.970790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.970876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.970902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.970980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.971005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.971100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.971128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 
00:26:39.800 [2024-10-08 18:33:57.971250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.971276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.971401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.971428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.971508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.971534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.971647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.971674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.971787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.971816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.971900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.971929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.972024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.972052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.972138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.972165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.972248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.972275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.972380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.972407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 
00:26:39.800 [2024-10-08 18:33:57.972495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.972522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.972617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.972644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.972729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.972755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.972835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.972864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.972959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.972985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.973067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.973095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.973184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.973215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.973354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.973382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.973478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.973504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.973600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.973627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 
00:26:39.800 [2024-10-08 18:33:57.973718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.973744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.973834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.973861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.973944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.973972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.974066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.974093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.974182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.974210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.974314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.974342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.974459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.974486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.974566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.974604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.974698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.974725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.974810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.974837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 
00:26:39.800 [2024-10-08 18:33:57.974927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.800 [2024-10-08 18:33:57.974955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.800 qpair failed and we were unable to recover it. 00:26:39.800 [2024-10-08 18:33:57.975036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.975064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.975139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.975165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.975259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.975294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.975380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.975406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.975502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.975528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.975654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.975681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.975769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.975796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.975914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.975941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.976032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.976059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 
00:26:39.801 [2024-10-08 18:33:57.976150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.976177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.976265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.976308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.976415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.976442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.976531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.976559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.976684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.976710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.976826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.976853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.976944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.976970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.977081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.977107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.977204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.977230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.977330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.977356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 
00:26:39.801 [2024-10-08 18:33:57.977447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.977473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.977578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.977605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.977700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.977728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.977812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.977839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.977923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.977951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.978034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.978060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.978145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.978173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.978274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.978308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.978401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.978429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.978510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.978536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 
00:26:39.801 [2024-10-08 18:33:57.978636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.978663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.978755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.978783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.978876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.978903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.978995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.979023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.979113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.979141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.979225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.979252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.979385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.979413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.979495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.979522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.979615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.979642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.801 qpair failed and we were unable to recover it. 00:26:39.801 [2024-10-08 18:33:57.979764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.801 [2024-10-08 18:33:57.979791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ff7060 with addr=10.0.0.2, port=4420 00:26:39.801 [2024-10-08 18:33:57.979783] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:39.801 qpair failed and we were unable to recover it. 
00:26:39.801 [2024-10-08 18:33:57.979822] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:39.801 [2024-10-08 18:33:57.979838] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:39.802 [2024-10-08 18:33:57.979850] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:39.802 [2024-10-08 18:33:57.979861] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:39.802 [2024-10-08 18:33:57.979882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.979909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 00:26:39.802 [2024-10-08 18:33:57.979999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.980024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 00:26:39.802 [2024-10-08 18:33:57.980141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.980170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 00:26:39.802 [2024-10-08 18:33:57.980262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.980297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 00:26:39.802 [2024-10-08 18:33:57.980375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.980402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 00:26:39.802 [2024-10-08 18:33:57.980487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.980515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 00:26:39.802 [2024-10-08 18:33:57.980638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.980664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 00:26:39.802 [2024-10-08 18:33:57.980750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.980778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 
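The app_setup_trace notices interleaved above describe how to pull the target's tracepoint data while it is still running. A minimal sketch of that capture, using the command the notice itself suggests (shared-memory id 0) and keeping the raw trace file for offline analysis as it also recommends:
spdk_trace -s nvmf -i 0            # snapshot the trace entries of the running nvmf app
cp /dev/shm/nvmf_trace.0 /tmp/     # or copy the raw trace file for offline debug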
00:26:39.802 [2024-10-08 18:33:57.980858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.980885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 00:26:39.802 [2024-10-08 18:33:57.980978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.981003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 00:26:39.802 [2024-10-08 18:33:57.981118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.981145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 00:26:39.802 [2024-10-08 18:33:57.981260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.981323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 00:26:39.802 [2024-10-08 18:33:57.981443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.981469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 00:26:39.802 [2024-10-08 18:33:57.981558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.981584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 00:26:39.802 [2024-10-08 18:33:57.981610] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:26:39.802 [2024-10-08 18:33:57.981672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.981697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 00:26:39.802 [2024-10-08 18:33:57.981811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.981838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 00:26:39.802 [2024-10-08 18:33:57.981826] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:26:39.802 [2024-10-08 18:33:57.981873] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:26:39.802 [2024-10-08 18:33:57.981925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.981877] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:26:39.802 [2024-10-08 18:33:57.981952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 
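The reactor_run notices mixed into the stream show the target application's reactors coming up on cores 4-7 while the host side is still retrying its connects. As a rough sketch only (not the harness's exact invocation), a target pinned to those cores with shared-memory id 0 would be launched along these lines:
build/bin/nvmf_tgt -m 0xF0 -i 0    # core mask 0xF0 selects cores 4-7; -i 0 matches the shm id used by spdk_trace above
Both -m (reactor core mask) and -i (shm id) are standard SPDK application options; the tracepoint group mask 0xFFFF reported earlier is enabled through the application's tracepoint option as well.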
00:26:39.802 [2024-10-08 18:33:57.982042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.982066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 00:26:39.802 [2024-10-08 18:33:57.982159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.982186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 00:26:39.802 [2024-10-08 18:33:57.982276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.982311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 00:26:39.802 [2024-10-08 18:33:57.982403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.982429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 00:26:39.802 [2024-10-08 18:33:57.982512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.982538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 00:26:39.802 [2024-10-08 18:33:57.982640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.982667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 00:26:39.802 [2024-10-08 18:33:57.982756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.982783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 00:26:39.802 [2024-10-08 18:33:57.982876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.982904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 00:26:39.802 [2024-10-08 18:33:57.983003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.983029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 00:26:39.802 [2024-10-08 18:33:57.983106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.983133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 
00:26:39.802 [2024-10-08 18:33:57.983219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.983247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 00:26:39.802 [2024-10-08 18:33:57.983371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.983398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 00:26:39.802 [2024-10-08 18:33:57.983480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.983506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 00:26:39.802 [2024-10-08 18:33:57.983610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.983637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 00:26:39.802 [2024-10-08 18:33:57.983734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.983760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 00:26:39.802 [2024-10-08 18:33:57.983841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.983867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 00:26:39.802 [2024-10-08 18:33:57.983950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.983975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 00:26:39.802 [2024-10-08 18:33:57.984069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.984096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 00:26:39.802 [2024-10-08 18:33:57.984187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.984214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 00:26:39.802 [2024-10-08 18:33:57.984314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.802 [2024-10-08 18:33:57.984341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.802 qpair failed and we were unable to recover it. 
00:26:39.803 [2024-10-08 18:33:57.984434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.803 [2024-10-08 18:33:57.984460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.803 qpair failed and we were unable to recover it. 00:26:39.803 [2024-10-08 18:33:57.984548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.803 [2024-10-08 18:33:57.984574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.803 qpair failed and we were unable to recover it. 00:26:39.803 [2024-10-08 18:33:57.984664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.803 [2024-10-08 18:33:57.984689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.803 qpair failed and we were unable to recover it. 00:26:39.803 [2024-10-08 18:33:57.984781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.803 [2024-10-08 18:33:57.984808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.803 qpair failed and we were unable to recover it. 00:26:39.803 [2024-10-08 18:33:57.984900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.803 [2024-10-08 18:33:57.984926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.803 qpair failed and we were unable to recover it. 00:26:39.803 [2024-10-08 18:33:57.985018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.803 [2024-10-08 18:33:57.985045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.803 qpair failed and we were unable to recover it. 00:26:39.803 [2024-10-08 18:33:57.985144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.803 [2024-10-08 18:33:57.985170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.803 qpair failed and we were unable to recover it. 00:26:39.803 [2024-10-08 18:33:57.985298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.803 [2024-10-08 18:33:57.985325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.803 qpair failed and we were unable to recover it. 00:26:39.803 [2024-10-08 18:33:57.985406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.803 [2024-10-08 18:33:57.985432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.803 qpair failed and we were unable to recover it. 00:26:39.803 [2024-10-08 18:33:57.985521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.803 [2024-10-08 18:33:57.985548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.803 qpair failed and we were unable to recover it. 
00:26:39.803 [2024-10-08 18:33:57.985644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.803 [2024-10-08 18:33:57.985669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.803 qpair failed and we were unable to recover it. 00:26:39.803 [2024-10-08 18:33:57.985758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.803 [2024-10-08 18:33:57.985786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.803 qpair failed and we were unable to recover it. 00:26:39.803 [2024-10-08 18:33:57.985883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.803 [2024-10-08 18:33:57.985909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.803 qpair failed and we were unable to recover it. 00:26:39.803 [2024-10-08 18:33:57.986021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.803 [2024-10-08 18:33:57.986053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.803 qpair failed and we were unable to recover it. 00:26:39.803 [2024-10-08 18:33:57.986141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.803 [2024-10-08 18:33:57.986167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.803 qpair failed and we were unable to recover it. 00:26:39.803 [2024-10-08 18:33:57.986250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.803 [2024-10-08 18:33:57.986275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.803 qpair failed and we were unable to recover it. 00:26:39.803 [2024-10-08 18:33:57.986392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.803 [2024-10-08 18:33:57.986418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.803 qpair failed and we were unable to recover it. 00:26:39.803 [2024-10-08 18:33:57.986512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.803 [2024-10-08 18:33:57.986539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.803 qpair failed and we were unable to recover it. 00:26:39.803 [2024-10-08 18:33:57.986627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.803 [2024-10-08 18:33:57.986653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.803 qpair failed and we were unable to recover it. 00:26:39.803 [2024-10-08 18:33:57.986740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.803 [2024-10-08 18:33:57.986766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.803 qpair failed and we were unable to recover it. 
00:26:39.803 [2024-10-08 18:33:57.986861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.803 [2024-10-08 18:33:57.986887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.803 qpair failed and we were unable to recover it. 00:26:39.803 [2024-10-08 18:33:57.986993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.803 [2024-10-08 18:33:57.987019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.803 qpair failed and we were unable to recover it. 00:26:39.803 [2024-10-08 18:33:57.987109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.803 [2024-10-08 18:33:57.987135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.803 qpair failed and we were unable to recover it. 00:26:39.803 [2024-10-08 18:33:57.987222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.803 [2024-10-08 18:33:57.987249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.803 qpair failed and we were unable to recover it. 00:26:39.803 [2024-10-08 18:33:57.987350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.803 [2024-10-08 18:33:57.987376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.803 qpair failed and we were unable to recover it. 00:26:39.803 [2024-10-08 18:33:57.987464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.803 [2024-10-08 18:33:57.987490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.803 qpair failed and we were unable to recover it. 00:26:39.804 [2024-10-08 18:33:57.987610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.804 [2024-10-08 18:33:57.987636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.804 qpair failed and we were unable to recover it. 00:26:39.804 [2024-10-08 18:33:57.987720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.804 [2024-10-08 18:33:57.987747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.804 qpair failed and we were unable to recover it. 00:26:39.804 [2024-10-08 18:33:57.987840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.804 [2024-10-08 18:33:57.987867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.804 qpair failed and we were unable to recover it. 00:26:39.804 [2024-10-08 18:33:57.987963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.804 [2024-10-08 18:33:57.987989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.804 qpair failed and we were unable to recover it. 
00:26:39.804 [2024-10-08 18:33:57.988072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.804 [2024-10-08 18:33:57.988098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.804 qpair failed and we were unable to recover it. 00:26:39.804 [2024-10-08 18:33:57.988180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.804 [2024-10-08 18:33:57.988205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.804 qpair failed and we were unable to recover it. 00:26:39.804 [2024-10-08 18:33:57.988309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.804 [2024-10-08 18:33:57.988337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.804 qpair failed and we were unable to recover it. 00:26:39.804 [2024-10-08 18:33:57.988418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.804 [2024-10-08 18:33:57.988444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.804 qpair failed and we were unable to recover it. 00:26:39.804 [2024-10-08 18:33:57.988526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.804 [2024-10-08 18:33:57.988552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.804 qpair failed and we were unable to recover it. 00:26:39.804 [2024-10-08 18:33:57.988647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.804 [2024-10-08 18:33:57.988673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.804 qpair failed and we were unable to recover it. 00:26:39.804 [2024-10-08 18:33:57.988760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.804 [2024-10-08 18:33:57.988786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.804 qpair failed and we were unable to recover it. 00:26:39.804 [2024-10-08 18:33:57.988869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.804 [2024-10-08 18:33:57.988895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.804 qpair failed and we were unable to recover it. 00:26:39.804 [2024-10-08 18:33:57.988978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.804 [2024-10-08 18:33:57.989003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.804 qpair failed and we were unable to recover it. 00:26:39.804 [2024-10-08 18:33:57.989086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.804 [2024-10-08 18:33:57.989113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.804 qpair failed and we were unable to recover it. 
00:26:39.804 [2024-10-08 18:33:57.989237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.804 [2024-10-08 18:33:57.989262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.804 qpair failed and we were unable to recover it. 00:26:39.804 A controller has encountered a failure and is being reset. 00:26:39.804 [2024-10-08 18:33:57.989374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.804 [2024-10-08 18:33:57.989402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.804 qpair failed and we were unable to recover it. 00:26:39.804 [2024-10-08 18:33:57.989483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.804 [2024-10-08 18:33:57.989508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.804 qpair failed and we were unable to recover it. 00:26:39.804 [2024-10-08 18:33:57.989601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.804 [2024-10-08 18:33:57.989628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.804 qpair failed and we were unable to recover it. 00:26:39.804 [2024-10-08 18:33:57.989747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.804 [2024-10-08 18:33:57.989774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.804 qpair failed and we were unable to recover it. 00:26:39.804 [2024-10-08 18:33:57.989858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.804 [2024-10-08 18:33:57.989884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.804 qpair failed and we were unable to recover it. 00:26:39.804 [2024-10-08 18:33:57.989970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.804 [2024-10-08 18:33:57.989997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.804 qpair failed and we were unable to recover it. 00:26:39.804 [2024-10-08 18:33:57.990084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.804 [2024-10-08 18:33:57.990110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.804 qpair failed and we were unable to recover it. 00:26:39.804 [2024-10-08 18:33:57.990196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.804 [2024-10-08 18:33:57.990222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.804 qpair failed and we were unable to recover it. 00:26:39.804 [2024-10-08 18:33:57.990325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.804 [2024-10-08 18:33:57.990352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.804 qpair failed and we were unable to recover it. 
00:26:39.804 [2024-10-08 18:33:57.990441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.804 [2024-10-08 18:33:57.990467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.804 qpair failed and we were unable to recover it. 00:26:39.804 [2024-10-08 18:33:57.990560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.804 [2024-10-08 18:33:57.990587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.804 qpair failed and we were unable to recover it. 00:26:39.804 [2024-10-08 18:33:57.990707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.804 [2024-10-08 18:33:57.990733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8174000b90 with addr=10.0.0.2, port=4420 00:26:39.804 qpair failed and we were unable to recover it. 00:26:39.804 [2024-10-08 18:33:57.990852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:39.804 [2024-10-08 18:33:57.990900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2004ff0 with addr=10.0.0.2, port=4420 00:26:39.804 [2024-10-08 18:33:57.990923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2004ff0 is same with the state(6) to be set 00:26:39.804 [2024-10-08 18:33:57.990948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2004ff0 (9): Bad file descriptor 00:26:39.804 [2024-10-08 18:33:57.990976] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:39.804 [2024-10-08 18:33:57.990990] nvme_ctrlr.c:1842:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:39.804 [2024-10-08 18:33:57.991006] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:39.804 Unable to reset the controller. 
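At this point the host gives up ("Unable to reset the controller.") because its reconnect attempts ran out while the target's TCP listener was still coming up; in the trace immediately below, the test script configures the target over RPC and a controller then resets cleanly. The rpc_cmd calls in that trace map onto SPDK's rpc.py, so the same target configuration done by hand would look roughly like this (a sketch that assumes the SPDK repo root as the working directory and an already running nvmf_tgt):
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420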
00:26:40.064 18:33:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:40.064 18:33:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:26:40.064 18:33:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:40.064 18:33:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:40.064 18:33:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:40.064 18:33:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:40.064 18:33:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:40.064 18:33:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.064 18:33:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:40.064 Malloc0 00:26:40.064 18:33:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.064 18:33:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:40.064 18:33:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.064 18:33:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:40.064 [2024-10-08 18:33:58.178392] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:40.064 18:33:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.064 18:33:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:40.064 18:33:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.064 18:33:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:40.064 18:33:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.064 18:33:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:40.064 18:33:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.064 18:33:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:40.064 18:33:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.064 18:33:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:40.064 18:33:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.064 18:33:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:40.064 [2024-10-08 18:33:58.206654] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:40.064 18:33:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.064 18:33:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:40.064 18:33:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.064 18:33:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:40.064 18:33:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.064 18:33:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1361460 00:26:41.000 Controller properly reset. 00:26:46.254 Initializing NVMe Controllers 00:26:46.254 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:46.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:46.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:46.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:46.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:46.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:46.254 Initialization complete. Launching workers. 
00:26:46.254 Starting thread on core 1 00:26:46.254 Starting thread on core 2 00:26:46.254 Starting thread on core 3 00:26:46.254 Starting thread on core 0 00:26:46.254 18:34:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:26:46.254 00:26:46.254 real 0m10.698s 00:26:46.254 user 0m34.172s 00:26:46.254 sys 0m7.103s 00:26:46.254 18:34:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:46.255 18:34:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:46.255 ************************************ 00:26:46.255 END TEST nvmf_target_disconnect_tc2 00:26:46.255 ************************************ 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:46.255 rmmod nvme_tcp 00:26:46.255 rmmod nvme_fabrics 00:26:46.255 rmmod nvme_keyring 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 1361983 ']' 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 1361983 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1361983 ']' 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 1361983 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1361983 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1361983' 00:26:46.255 killing process with pid 1361983 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@969 -- # kill 1361983 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 1361983 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:46.255 18:34:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.787 18:34:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:48.787 00:26:48.787 real 0m15.890s 00:26:48.787 user 0m59.988s 00:26:48.787 sys 0m9.601s 00:26:48.787 18:34:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:48.787 18:34:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:48.787 ************************************ 00:26:48.787 END TEST nvmf_target_disconnect 00:26:48.787 ************************************ 00:26:48.787 18:34:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:26:48.787 00:26:48.787 real 5m9.647s 00:26:48.787 user 11m10.360s 00:26:48.787 sys 1m16.356s 00:26:48.787 18:34:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:48.787 18:34:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.787 ************************************ 00:26:48.787 END TEST nvmf_host 00:26:48.787 ************************************ 00:26:48.787 18:34:06 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:26:48.787 18:34:06 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:26:48.787 18:34:06 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:26:48.787 18:34:06 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:26:48.787 18:34:06 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:48.787 18:34:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:48.787 ************************************ 00:26:48.787 START TEST nvmf_target_core_interrupt_mode 00:26:48.787 ************************************ 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:26:48.787 * Looking for test storage... 00:26:48.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lcov --version 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:48.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.787 --rc genhtml_branch_coverage=1 00:26:48.787 --rc genhtml_function_coverage=1 00:26:48.787 --rc genhtml_legend=1 00:26:48.787 --rc geninfo_all_blocks=1 00:26:48.787 --rc geninfo_unexecuted_blocks=1 00:26:48.787 00:26:48.787 ' 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:48.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.787 --rc genhtml_branch_coverage=1 00:26:48.787 --rc genhtml_function_coverage=1 00:26:48.787 --rc genhtml_legend=1 00:26:48.787 --rc geninfo_all_blocks=1 00:26:48.787 --rc geninfo_unexecuted_blocks=1 00:26:48.787 00:26:48.787 ' 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:48.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.787 --rc genhtml_branch_coverage=1 00:26:48.787 --rc genhtml_function_coverage=1 00:26:48.787 --rc genhtml_legend=1 00:26:48.787 --rc geninfo_all_blocks=1 00:26:48.787 --rc geninfo_unexecuted_blocks=1 00:26:48.787 00:26:48.787 ' 00:26:48.787 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:48.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.787 --rc genhtml_branch_coverage=1 00:26:48.787 --rc genhtml_function_coverage=1 00:26:48.787 --rc genhtml_legend=1 00:26:48.787 --rc geninfo_all_blocks=1 00:26:48.787 --rc geninfo_unexecuted_blocks=1 00:26:48.787 00:26:48.787 ' 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:48.788 ************************************ 00:26:48.788 START TEST nvmf_abort 00:26:48.788 ************************************ 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:26:48.788 * Looking for test storage... 00:26:48.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:48.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.788 --rc genhtml_branch_coverage=1 00:26:48.788 --rc genhtml_function_coverage=1 00:26:48.788 --rc genhtml_legend=1 00:26:48.788 --rc geninfo_all_blocks=1 00:26:48.788 --rc geninfo_unexecuted_blocks=1 00:26:48.788 00:26:48.788 ' 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:48.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.788 --rc genhtml_branch_coverage=1 00:26:48.788 --rc genhtml_function_coverage=1 00:26:48.788 --rc genhtml_legend=1 00:26:48.788 --rc geninfo_all_blocks=1 00:26:48.788 --rc geninfo_unexecuted_blocks=1 00:26:48.788 00:26:48.788 ' 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:48.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.788 --rc genhtml_branch_coverage=1 00:26:48.788 --rc genhtml_function_coverage=1 00:26:48.788 --rc genhtml_legend=1 00:26:48.788 --rc geninfo_all_blocks=1 00:26:48.788 --rc geninfo_unexecuted_blocks=1 00:26:48.788 00:26:48.788 ' 00:26:48.788 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:48.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.788 --rc genhtml_branch_coverage=1 00:26:48.789 --rc genhtml_function_coverage=1 00:26:48.789 --rc genhtml_legend=1 00:26:48.789 --rc geninfo_all_blocks=1 00:26:48.789 --rc geninfo_unexecuted_blocks=1 00:26:48.789 00:26:48.789 ' 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:48.789 18:34:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:26:48.789 18:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:51.316 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:51.316 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:26:51.316 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:51.316 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:51.316 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:51.316 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:51.316 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:51.316 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:26:51.316 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:51.316 18:34:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:26:51.316 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:26:51.316 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:26:51.316 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:26:51.316 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:26:51.316 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:51.317 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
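The NIC scan above works purely from PCI vendor/device IDs, matching 0x8086:0x159b against the e810 list and then reading the bound net device back out of sysfs. A rough manual equivalent, assuming lspci is installed and using the addresses reported in the trace:

  lspci -nn -d 8086:159b                      # lists the E810 functions, 0000:09:00.0 and 0000:09:00.1 here
  ls /sys/bus/pci/devices/0000:09:00.0/net/   # the interface bound to that function, cvl_0_0 here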
00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:51.317 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:51.317 Found net devices under 0000:09:00.0: cvl_0_0 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:51.317 Found net devices under 0000:09:00.1: cvl_0_1 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:51.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:51.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:26:51.317 00:26:51.317 --- 10.0.0.2 ping statistics --- 00:26:51.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.317 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:51.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:51.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:26:51.317 00:26:51.317 --- 10.0.0.1 ping statistics --- 00:26:51.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.317 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:26:51.317 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:51.318 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:26:51.318 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:51.318 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:51.318 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:51.318 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:51.318 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:51.318 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:51.318 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:51.318 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:26:51.318 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:51.318 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:51.318 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:51.318 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # 
nvmfpid=1364792 00:26:51.318 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:26:51.318 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 1364792 00:26:51.318 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1364792 ']' 00:26:51.318 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:51.318 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:51.318 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:51.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:51.318 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:51.318 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:51.318 [2024-10-08 18:34:09.318171] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:51.318 [2024-10-08 18:34:09.319266] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:26:51.318 [2024-10-08 18:34:09.319358] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:51.318 [2024-10-08 18:34:09.381873] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:51.318 [2024-10-08 18:34:09.488862] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:51.318 [2024-10-08 18:34:09.488919] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:51.318 [2024-10-08 18:34:09.488932] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:51.318 [2024-10-08 18:34:09.488943] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:51.318 [2024-10-08 18:34:09.488953] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:51.318 [2024-10-08 18:34:09.489891] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:26:51.318 [2024-10-08 18:34:09.492307] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:26:51.318 [2024-10-08 18:34:09.492314] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.576 [2024-10-08 18:34:09.605727] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:51.576 [2024-10-08 18:34:09.605943] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:51.576 [2024-10-08 18:34:09.605971] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
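The target command line is embedded in the waitforlisten record above; pulled out on its own (flags verbatim from the trace, started from the SPDK build tree inside the test namespace): -m 0xE pins the reactors to cores 1-3, which matches the three reactor start-up notices, -e 0xFFFF is the tracepoint group mask mentioned in the app_setup_trace notices, and --interrupt-mode is what later switches each spdk_thread to intr mode.

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
  # waitforlisten then polls the RPC socket (/var/tmp/spdk.sock per the message above) until the app answers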
00:26:51.576 [2024-10-08 18:34:09.606237] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:51.576 [2024-10-08 18:34:09.661002] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:51.576 Malloc0 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:51.576 Delay0 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
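The namespace behind cnode0 is a malloc bdev wrapped in a delay bdev. The four latency knobs on bdev_delay_create (-r/-t/-w/-n: average and p99, read then write) are in microseconds, so with one second on every path the I/O sits queued long enough for the abort requests issued by the next step to find something to cancel, which is presumably the point of the Delay0 layer. The two calls as traced (64 MiB backing store, 4 KiB blocks, per MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE above):

  rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
  rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000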
00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:51.576 [2024-10-08 18:34:09.725182] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.576 18:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:26:51.576 [2024-10-08 18:34:09.827158] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:26:54.102 Initializing NVMe Controllers 00:26:54.102 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:26:54.102 controller IO queue size 128 less than required 00:26:54.102 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:26:54.102 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:26:54.102 Initialization complete. Launching workers. 
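On the initiator side, the example just launched is restated below with its path shortened; the flag meanings given in the comment follow the usual SPDK example-app conventions and are an annotation, not part of the trace. The "controller IO queue size 128 less than required" warning printed above is triggered by the requested depth of 128, and, as the message says, the excess requests simply queue at the driver.

  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
  # -r transport ID, -q queue depth, -t run time in seconds, -c core mask, -l log level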
00:26:54.102 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28211 00:26:54.102 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28268, failed to submit 66 00:26:54.102 success 28211, unsuccessful 57, failed 0 00:26:54.102 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:54.102 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.102 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:54.102 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.102 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:54.102 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:26:54.102 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:54.102 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:26:54.102 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:54.102 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:26:54.102 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:54.102 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:54.102 rmmod nvme_tcp 00:26:54.103 rmmod nvme_fabrics 00:26:54.103 rmmod nvme_keyring 00:26:54.103 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:54.103 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:26:54.103 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:26:54.103 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 1364792 ']' 00:26:54.103 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 1364792 00:26:54.103 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1364792 ']' 00:26:54.103 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1364792 00:26:54.103 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:26:54.103 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:54.103 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1364792 00:26:54.103 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:54.103 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:54.103 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1364792' 00:26:54.103 killing process with pid 1364792 
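The abort counters reported above are internally consistent; a quick check on the arithmetic:

  28211 successful + 57 unsuccessful      = 28268 aborts submitted
  28268 submitted  + 66 failed to submit  = 28334 abort attempts
  123 completed    + 28211 aborted        = 28334 commands retired

so an abort was attempted for every command, the 28211 "failed" I/O on the NS line are the ones taken out by their aborts, and the 123 that completed normally line up with the 66 + 57 aborts that never took effect.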
00:26:54.103 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1364792 00:26:54.103 18:34:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1364792 00:26:54.103 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:54.103 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:54.103 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:54.103 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:26:54.103 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:26:54.103 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:54.103 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:26:54.103 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:54.103 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:54.103 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.103 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:54.103 18:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.633 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:56.633 00:26:56.633 real 0m7.579s 00:26:56.633 user 0m9.354s 00:26:56.633 sys 0m2.951s 00:26:56.633 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:56.633 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:56.633 ************************************ 00:26:56.633 END TEST nvmf_abort 00:26:56.633 ************************************ 00:26:56.633 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:26:56.633 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:26:56.633 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:56.633 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:56.633 ************************************ 00:26:56.634 START TEST nvmf_ns_hotplug_stress 00:26:56.634 ************************************ 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:26:56.634 * Looking for test storage... 
00:26:56.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:56.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.634 --rc genhtml_branch_coverage=1 00:26:56.634 --rc genhtml_function_coverage=1 00:26:56.634 --rc genhtml_legend=1 00:26:56.634 --rc geninfo_all_blocks=1 00:26:56.634 --rc geninfo_unexecuted_blocks=1 00:26:56.634 00:26:56.634 ' 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:56.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.634 --rc genhtml_branch_coverage=1 00:26:56.634 --rc genhtml_function_coverage=1 00:26:56.634 --rc genhtml_legend=1 00:26:56.634 --rc geninfo_all_blocks=1 00:26:56.634 --rc geninfo_unexecuted_blocks=1 00:26:56.634 00:26:56.634 ' 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:56.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.634 --rc genhtml_branch_coverage=1 00:26:56.634 --rc genhtml_function_coverage=1 00:26:56.634 --rc genhtml_legend=1 00:26:56.634 --rc geninfo_all_blocks=1 00:26:56.634 --rc geninfo_unexecuted_blocks=1 00:26:56.634 00:26:56.634 ' 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:56.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.634 --rc genhtml_branch_coverage=1 00:26:56.634 --rc genhtml_function_coverage=1 
00:26:56.634 --rc genhtml_legend=1 00:26:56.634 --rc geninfo_all_blocks=1 00:26:56.634 --rc geninfo_unexecuted_blocks=1 00:26:56.634 00:26:56.634 ' 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:56.634 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:56.635 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:56.635 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:56.635 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:56.635 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:56.635 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:56.635 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:56.635 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:56.635 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:56.635 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:26:56.635 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:56.635 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:56.635 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:56.635 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:56.635 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:56.635 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.635 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:56.635 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.635 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:56.635 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:56.635 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:26:56.635 18:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:58.534 18:34:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:58.534 18:34:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:58.534 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:58.534 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.534 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:58.535 
18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:58.535 Found net devices under 0000:09:00.0: cvl_0_0 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:58.535 Found net devices under 0000:09:00.1: cvl_0_1 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:58.535 18:34:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:58.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:58.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:26:58.535 00:26:58.535 --- 10.0.0.2 ping statistics --- 00:26:58.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.535 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:58.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:58.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:26:58.535 00:26:58.535 --- 10.0.0.1 ping statistics --- 00:26:58.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.535 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=1367022 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 1367022 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 1367022 ']' 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:58.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:58.535 18:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:58.535 [2024-10-08 18:34:16.804747] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:58.535 [2024-10-08 18:34:16.805857] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:26:58.535 [2024-10-08 18:34:16.805908] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:58.793 [2024-10-08 18:34:16.868270] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:58.793 [2024-10-08 18:34:16.975885] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:58.793 [2024-10-08 18:34:16.975942] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:58.793 [2024-10-08 18:34:16.975971] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:58.793 [2024-10-08 18:34:16.975983] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:58.793 [2024-10-08 18:34:16.975992] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:58.793 [2024-10-08 18:34:16.976955] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:26:58.793 [2024-10-08 18:34:16.977004] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:26:58.793 [2024-10-08 18:34:16.977008] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:26:59.051 [2024-10-08 18:34:17.086736] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:59.051 [2024-10-08 18:34:17.086946] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:59.051 [2024-10-08 18:34:17.086959] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:59.051 [2024-10-08 18:34:17.087205] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:26:59.051 18:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:59.051 18:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:26:59.051 18:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:59.051 18:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:59.051 18:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:59.051 18:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:59.051 18:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:26:59.051 18:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:59.308 [2024-10-08 18:34:17.389711] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:59.308 18:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:59.567 18:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:59.824 [2024-10-08 18:34:17.946137] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:59.824 18:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:00.081 18:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:27:00.338 Malloc0 00:27:00.338 18:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:00.596 Delay0 00:27:00.596 18:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:00.853 18:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:27:01.109 NULL1 00:27:01.110 18:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
00:27:01.366 18:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1367433 00:27:01.366 18:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1367433 00:27:01.366 18:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:01.366 18:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:27:01.623 18:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:01.880 18:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:27:01.880 18:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:27:02.137 true 00:27:02.394 18:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1367433 00:27:02.394 18:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:02.651 18:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:02.908 18:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:27:02.908 18:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:27:03.165 true 00:27:03.165 18:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1367433 00:27:03.165 18:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:03.422 18:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:03.679 18:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:27:03.679 18:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:27:03.936 true 00:27:03.936 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 1367433 00:27:03.936 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:04.867 Read completed with error (sct=0, sc=11) 00:27:04.867 18:34:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:04.867 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:05.125 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:27:05.125 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:27:05.382 true 00:27:05.382 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1367433 00:27:05.382 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:05.639 18:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:05.896 18:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:27:05.896 18:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:27:06.153 true 00:27:06.153 18:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1367433 00:27:06.153 18:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:06.410 18:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:06.666 18:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:27:06.667 18:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:27:06.923 true 00:27:06.924 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1367433 00:27:06.924 18:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:07.854 18:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:08.111 18:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:27:08.111 18:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:27:08.369 true 00:27:08.369 18:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1367433 00:27:08.369 18:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:08.626 18:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:08.883 18:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:27:08.883 18:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:27:09.140 true 00:27:09.140 18:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1367433 00:27:09.140 18:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:09.397 18:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:09.654 18:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:27:09.911 18:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:27:10.171 true 00:27:10.171 18:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1367433 00:27:10.171 18:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:11.102 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:11.102 18:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:11.102 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:11.358 18:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:27:11.358 18:34:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:27:11.615 true 00:27:11.615 18:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1367433 00:27:11.615 18:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:11.872 18:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:12.129 18:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:27:12.129 18:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:27:12.385 true 00:27:12.385 18:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1367433 00:27:12.385 18:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:13.317 18:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:13.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:13.317 18:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:27:13.317 18:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:27:13.574 true 00:27:13.831 18:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1367433 00:27:13.831 18:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:14.088 18:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:14.345 18:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:27:14.345 18:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:27:14.602 true 00:27:14.602 18:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1367433 00:27:14.602 18:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:15.533 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:15.533 18:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:15.533 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:15.533 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:15.791 18:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:27:15.791 18:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:27:16.048 true 00:27:16.048 18:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1367433 00:27:16.048 18:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:16.305 18:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:16.562 18:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:27:16.562 18:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:27:16.820 true 00:27:16.820 18:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1367433 00:27:16.820 18:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:17.411 18:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:17.704 18:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:27:17.704 18:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:27:17.979 true 00:27:17.980 18:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1367433 00:27:17.980 18:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:18.236 18:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:18.493 18:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:27:18.494 18:34:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:27:18.751 true 00:27:19.008 18:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1367433 00:27:19.008 18:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:19.265 18:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:19.523 18:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:27:19.523 18:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:27:19.780 true 00:27:19.780 18:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1367433 00:27:19.780 18:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:20.712 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:20.712 18:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:20.712 18:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:27:20.712 18:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:27:20.993 true 00:27:20.993 18:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1367433 00:27:20.993 18:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:21.251 18:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:21.816 18:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:27:21.816 18:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:27:21.816 true 00:27:21.816 18:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1367433 00:27:21.816 18:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:22.748 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:22.748 18:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:22.748 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:22.748 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:23.006 18:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:27:23.006 18:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:27:23.263 true 00:27:23.263 18:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1367433 00:27:23.263 18:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:23.520 18:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:23.777 18:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:27:23.777 18:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:27:24.034 true 00:27:24.034 18:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1367433 00:27:24.034 18:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:24.966 18:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:24.966 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:24.966 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:24.966 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:25.223 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:25.223 18:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:27:25.223 18:34:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:27:25.480 true 00:27:25.480 18:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1367433 00:27:25.480 18:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:25.737 18:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:25.994 18:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:27:25.994 18:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:27:26.251 true 00:27:26.251 18:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1367433 00:27:26.251 18:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:27.183 18:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:27.440 18:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:27:27.440 18:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:27:27.697 true 00:27:27.697 18:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1367433 00:27:27.697 18:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:27.953 18:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:28.210 18:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:27:28.210 18:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:27:28.467 true 00:27:28.467 18:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1367433 00:27:28.467 18:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:28.724 18:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:28.981 18:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:27:28.981 18:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:27:29.238 true 00:27:29.238 18:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1367433 00:27:29.238 18:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:30.169 18:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:30.427 18:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:27:30.427 18:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:27:30.684 true 00:27:30.684 18:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1367433 00:27:30.684 18:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:30.941 18:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:31.198 18:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:27:31.198 18:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:27:31.530 true 00:27:31.530 18:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1367433 00:27:31.530 18:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:31.807 18:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:31.807 Initializing NVMe Controllers 00:27:31.807 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:31.807 Controller IO 
queue size 128, less than required.
00:27:31.807 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:31.807 Controller IO queue size 128, less than required.
00:27:31.807 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:31.807 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:27:31.807 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:27:31.807 Initialization complete. Launching workers.
00:27:31.807 ========================================================
00:27:31.807 Latency(us)
00:27:31.807 Device Information : IOPS MiB/s Average min max
00:27:31.807 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 560.80 0.27 93899.14 3045.17 1014480.02
00:27:31.807 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8544.88 4.17 14936.67 2886.46 451650.53
00:27:31.807 ========================================================
00:27:31.807 Total : 9105.67 4.45 19799.77 2886.46 1014480.02
00:27:31.807
00:27:32.064 18:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:27:32.064 18:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:27:32.321 true
00:27:32.321 18:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1367433
00:27:32.321 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1367433) - No such process
00:27:32.321 18:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1367433
00:27:32.321 18:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:32.578 18:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:27:32.835 18:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:27:32.835 18:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:27:32.835 18:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:27:32.835 18:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:27:32.835 18:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:27:33.093 null0
00:27:33.093 18:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:27:33.093 18:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:27:33.093 18:34:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:27:33.350 null1 00:27:33.607 18:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:33.607 18:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:33.608 18:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:27:33.865 null2 00:27:33.865 18:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:33.865 18:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:33.865 18:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:27:34.122 null3 00:27:34.122 18:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:34.122 18:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:34.122 18:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:27:34.380 null4 00:27:34.380 18:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:34.380 18:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:34.380 18:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:27:34.637 null5 00:27:34.637 18:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:34.637 18:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:34.637 18:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:27:34.894 null6 00:27:34.894 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:34.894 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:34.894 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:27:35.153 null7 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:27:35.153 18:34:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
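
For reference, the following is a minimal sketch of the add_remove worker that the trace above is stepping through, reconstructed only from the visible @14-@18 commands; the rpc_py shorthand and the C-style for loop are assumptions, not the verbatim SPDK script.

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # add_remove <nsid> <bdev>: hot-add the null bdev as namespace <nsid>
    # of nqn.2016-06.io.spdk:cnode1, then remove it again, ten times.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

Each worker is launched in the background (add_remove 1 null0, add_remove 2 null1, ...) and its PID is appended to pids, as the @63-@64 entries in the trace show.
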
00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:35.153 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:27:35.154 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:27:35.154 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:35.154 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:27:35.154 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:35.154 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:35.154 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.154 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:35.154 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:35.154 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:27:35.154 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:35.154 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:27:35.154 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:35.154 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:35.154 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.154 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:35.154 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
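
Under the same assumptions, this is a sketch of the setup and spawn logic visible at @58-@66 of the trace: eight 100 MiB null bdevs with a 4096-byte block size are created, one background add_remove worker is started per bdev, and the script then waits for all of them; the exact loop form and the wait expansion are assumptions.

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    pids=()
    # create null0..null7, each 100 MiB with a 4096-byte block size
    for ((i = 0; i < nthreads; i++)); do
        $rpc_py bdev_null_create "null$i" 100 4096
    done
    # one hotplug worker per bdev, namespace IDs 1..8, run concurrently
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"

The remaining worker launches and the wait on their PIDs continue in the trace below.
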
00:27:35.154 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:27:35.154 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:35.154 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:27:35.154 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:35.154 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:35.154 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.154 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:35.154 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:27:35.154 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:27:35.154 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:27:35.154 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:27:35.154 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:27:35.154 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:27:35.154 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1371453 1371454 1371456 1371458 1371460 1371462 1371464 1371466 00:27:35.154 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.154 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:35.412 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:35.412 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:35.412 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:35.412 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:35.412 18:34:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:35.412 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:35.412 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:35.412 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:35.669 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.669 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.669 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:35.669 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.669 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.669 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:35.669 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.669 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.669 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:35.669 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.669 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.669 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:35.669 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.669 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.670 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:27:35.670 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.670 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.670 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:35.670 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.670 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.670 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:35.670 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:35.670 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:35.670 18:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:35.928 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:35.928 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:35.928 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:36.185 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:36.185 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:36.185 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:36.185 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:36.185 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 
00:27:36.443 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:36.443 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.443 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:36.443 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:36.443 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.443 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:36.443 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:36.443 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:36.443 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.443 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.443 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:36.443 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:36.443 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:36.443 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.443 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:36.443 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:36.443 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.443 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:36.443 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:36.443 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.443 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:36.443 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:36.443 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.443 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:36.701 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:36.701 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:36.701 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:36.701 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:36.701 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:36.701 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:36.701 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:36.701 18:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:36.959 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:36.959 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.959 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:36.959 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:36.959 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.959 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:36.959 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:36.959 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.959 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:36.959 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:36.959 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.959 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:36.959 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:36.959 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.959 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:36.959 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:36.959 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.959 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:36.959 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:36.959 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.959 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:36.959 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:36.959 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:36.959 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:37.217 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:37.217 18:34:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:37.217 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:37.217 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:37.217 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:37.217 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:37.217 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:37.217 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:37.475 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:37.475 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:37.475 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:37.475 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:37.475 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:37.475 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:37.475 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:37.475 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:37.475 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:37.475 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:37.475 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:37.475 18:34:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:37.475 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:37.475 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:37.475 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:37.475 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:37.475 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:37.475 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:37.475 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:37.475 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:37.475 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:37.475 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:37.475 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:37.475 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:37.732 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:37.732 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:37.732 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:37.732 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:37.733 18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:37.733 
18:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:37.733 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:37.733 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:38.296 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:38.296 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:38.296 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:38.296 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:38.296 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:38.296 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:38.296 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:38.296 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:38.296 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:38.297 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:38.297 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:38.297 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:38.297 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:38.297 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:38.297 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:38.297 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:38.297 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:38.297 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:38.297 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:38.297 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:38.297 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:38.297 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:38.297 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:38.297 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:38.553 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:38.553 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:38.553 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:38.553 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:38.554 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:38.554 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:38.554 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:38.554 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:38.810 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:38.810 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:27:38.810 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:38.810 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:38.810 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:38.810 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:38.810 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:38.810 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:38.810 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:38.810 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:38.810 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:38.810 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:38.810 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:38.810 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:38.810 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:38.810 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:38.810 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:38.810 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:38.810 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:38.810 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:38.810 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:38.810 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:38.810 
18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:38.810 18:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:39.068 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:39.068 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:39.068 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:39.068 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:39.068 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:39.068 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:39.068 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:39.068 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:39.325 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:39.325 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:39.325 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:39.325 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:39.325 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:39.325 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:39.325 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:39.325 18:34:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:39.325 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:39.325 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:39.325 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:39.325 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:39.325 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:39.325 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:39.325 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:39.325 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:39.325 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:39.325 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:39.325 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:39.325 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:39.325 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:39.325 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:39.325 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:39.325 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:39.583 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:39.583 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:39.583 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:39.583 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:39.583 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:39.583 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:39.583 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:39.583 18:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:39.840 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:39.840 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:39.840 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:39.840 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:39.840 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:39.840 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:39.840 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:39.840 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:39.840 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:39.840 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:39.840 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:39.840 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:39.841 18:34:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:39.841 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:39.841 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:39.841 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:39.841 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:39.841 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:39.841 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:39.841 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:39.841 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:39.841 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:39.841 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:39.841 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:40.099 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:40.099 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:40.099 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:40.356 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:40.356 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:40.356 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:40.356 
18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:40.356 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:40.613 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:40.613 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:40.613 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:27:40.613 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:40.613 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:40.613 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:27:40.613 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:40.613 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:40.613 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:27:40.613 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:40.613 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:40.613 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:40.613 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:40.613 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:27:40.613 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:27:40.613 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:40.613 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:40.613 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:27:40.613 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:40.613 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:40.613 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:27:40.613 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:40.613 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:40.613 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:27:40.871 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:27:40.871 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:27:40.871 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:27:40.871 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:40.871 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:27:40.871 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:27:40.871 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:27:40.871 18:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:41.128 rmmod nvme_tcp 00:27:41.128 rmmod nvme_fabrics 00:27:41.128 rmmod nvme_keyring 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 1367022 ']' 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 1367022 00:27:41.128 18:34:59 
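The nvmf/common.sh@121-@129 trace just above is the initiator-side module cleanup. Judging from the traced commands (sync, a 20-iteration retry loop, modprobe -v -r), the helper has roughly the following shape; only the TCP branch is sketched and anything not visible in the trace is assumed:

nvmfcleanup() {
    sync                                   # @121: flush before unloading the kernel initiator modules
    set +e                                 # @124: removal can fail while connections are still draining
    for i in {1..20}; do                   # @125: retry loop as traced
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break   # @126/@127: rmmod output appears inline above
    done
    set -e                                 # @128
    return 0                               # @129: cleanup is best-effort
}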
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1367022 ']' 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1367022 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1367022 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1367022' 00:27:41.128 killing process with pid 1367022 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1367022 00:27:41.128 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1367022 00:27:41.695 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:41.695 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:41.695 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:41.695 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:27:41.695 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:27:41.695 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:41.695 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:27:41.695 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:41.695 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:41.695 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.695 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:41.695 18:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.598 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:43.598 00:27:43.598 real 0m47.347s 00:27:43.598 user 3m18.978s 00:27:43.598 sys 0m21.648s 00:27:43.598 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:43.598 18:35:01 
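The common/autotest_common.sh@950-@974 entries above trace the killprocess helper used to take down the nvmf target (pid 1367022, an SPDK reactor_1 process in this run). A sketch that follows the traced steps, with everything not shown in the log treated as an assumption:

killprocess() {
    local pid=$1
    [[ -n "$pid" ]] || return 1                          # @950: refuse an empty pid
    kill -0 "$pid" 2>/dev/null || return 0               # @954: nothing to do if it already exited
    local process_name=
    if [[ "$(uname)" == Linux ]]; then                   # @955
        process_name=$(ps --no-headers -o comm= "$pid")  # @956: reactor_1 for an SPDK target
    fi
    # @960 compares $process_name against "sudo"; that branch is not taken here, so its body is omitted.
    echo "killing process with pid $pid"                 # @968
    kill "$pid"                                          # @969
    wait "$pid" || true                                  # @974: reap it so ports and hugepages are released
}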
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:43.598 ************************************ 00:27:43.598 END TEST nvmf_ns_hotplug_stress 00:27:43.598 ************************************ 00:27:43.598 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:27:43.598 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:27:43.598 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:43.598 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:43.598 ************************************ 00:27:43.598 START TEST nvmf_delete_subsystem 00:27:43.598 ************************************ 00:27:43.598 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:27:43.598 * Looking for test storage... 00:27:43.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:43.598 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:43.598 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version 00:27:43.598 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:43.856 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:43.856 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:43.856 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:43.856 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:43.856 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:27:43.856 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:27:43.856 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:27:43.856 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:27:43.856 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:27:43.856 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:27:43.856 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:27:43.856 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:27:43.857 18:35:01 
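The scripts/common.sh trace that starts here and continues below is the lcov version gate: `lt 1.15 2` asks whether the installed lcov (1.15) is older than 2.x so that the matching coverage flags can be exported. One plausible reading of the traced helper, with the names taken from the @333-@368 line references and the details filled in by assumption:

cmp_versions() {
    local ver1 ver2 op=$2 lt=0 gt=0 v
    IFS=.- read -ra ver1 <<< "$1"        # split "1.15" into (1 15)
    IFS=.- read -ra ver2 <<< "$3"        # split "2" into (2)
    local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { gt=1; break; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { lt=1; break; }
    done
    case "$op" in
        "<") (( lt == 1 )) ;;
        ">") (( gt == 1 )) ;;
        *) (( lt == 0 && gt == 0 )) ;;   # treat anything else as an equality test in this sketch
    esac
}
lt() { cmp_versions "$1" "<" "$2"; }     # invoked above as: lt 1.15 2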
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:43.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.857 --rc genhtml_branch_coverage=1 00:27:43.857 --rc genhtml_function_coverage=1 00:27:43.857 --rc genhtml_legend=1 00:27:43.857 --rc geninfo_all_blocks=1 00:27:43.857 --rc geninfo_unexecuted_blocks=1 00:27:43.857 00:27:43.857 ' 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:43.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.857 --rc genhtml_branch_coverage=1 00:27:43.857 --rc genhtml_function_coverage=1 00:27:43.857 --rc genhtml_legend=1 00:27:43.857 --rc geninfo_all_blocks=1 00:27:43.857 --rc geninfo_unexecuted_blocks=1 00:27:43.857 00:27:43.857 ' 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:43.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.857 --rc genhtml_branch_coverage=1 00:27:43.857 --rc genhtml_function_coverage=1 00:27:43.857 --rc genhtml_legend=1 00:27:43.857 --rc geninfo_all_blocks=1 00:27:43.857 --rc 
geninfo_unexecuted_blocks=1 00:27:43.857 00:27:43.857 ' 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:43.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:43.857 --rc genhtml_branch_coverage=1 00:27:43.857 --rc genhtml_function_coverage=1 00:27:43.857 --rc genhtml_legend=1 00:27:43.857 --rc geninfo_all_blocks=1 00:27:43.857 --rc geninfo_unexecuted_blocks=1 00:27:43.857 00:27:43.857 ' 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:43.857 18:35:01 
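Before any connections are attempted, nvmf/common.sh (traced at @7-@22 above) pins down the test-wide constants. The values below are read directly off the xtrace; the derivation of the host ID from the generated NQN is an assumption:

NVMF_PORT=4420
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
NVMF_IP_PREFIX=192.168.100
NVMF_IP_LEAST_ADDR=8
NVMF_TCP_IP_ADDRESS=127.0.0.1
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_HOSTNQN=$(nvme gen-hostnqn)                  # @17: a fresh uuid-based host NQN per run
NVME_HOSTID=${NVME_HOSTNQN##*:}                   # @18: the uuid portion of that NQN (assumed derivation)
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NVME_CONNECT="nvme connect"
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn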
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:43.857 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.858 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:43.858 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:43.858 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:27:43.858 18:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:45.755 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:45.755 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:27:45.755 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:45.755 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:45.755 18:35:04 
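The common.sh@25-@34 entries above show how the target's argument list is assembled for this run: the shared-memory id and a full trace mask are always appended, and --interrupt-mode is added because the suite was invoked with --interrupt-mode. A minimal sketch of that assembly; the two guard variables are stand-ins, since the trace only shows the literal tests '[' 0 -eq 1 ']' and '[' 1 -eq 1 ']':

build_nvmf_app_args() {
    if [ "${run_as_non_root:-0}" -eq 1 ]; then                 # @25: not taken in this run
        NVMF_APP=(sudo -E -u "$SUDO_USER" "${NVMF_APP[@]}")    # assumed body; not visible in the log
    fi
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)                # @29: shm id plus a verbose trace mask
    NVMF_APP+=("${NO_HUGE[@]}")                                # @31: optional no-hugepages flags
    if [ "${interrupt_mode:-0}" -eq 1 ]; then                  # @33: true here because --interrupt-mode was passed
        NVMF_APP+=(--interrupt-mode)                           # @34: the flag this test variant exercises
    fi
}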
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:45.755 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:45.755 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:45.755 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:27:45.755 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:45.755 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:27:45.755 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:27:45.755 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:27:45.755 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:27:45.755 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:27:45.755 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:27:45.755 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:45.755 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:46.013 18:35:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:46.013 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:46.013 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:46.013 18:35:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:46.013 Found net devices under 0000:09:00.0: cvl_0_0 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:46.013 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:46.014 Found net devices under 0000:09:00.1: cvl_0_1 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:46.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:46.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:27:46.014 00:27:46.014 --- 10.0.0.2 ping statistics --- 00:27:46.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:46.014 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:46.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:46.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:27:46.014 00:27:46.014 --- 10.0.0.1 ping statistics --- 00:27:46.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:46.014 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=1374265 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 1374265 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1374265 ']' 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:46.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
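[editor's note] For reference, the interface plumbing completed just above (namespace creation, address assignment, firewall rule, and the two ping probes) condenses to the commands below. The interface names cvl_0_0/cvl_0_1, the namespace name, and the 10.0.0.x addresses are the ones reported in the trace; the iptables rule in the real run additionally carries an SPDK_NVMF comment tag. This is a condensed restatement of what the log already did, not an extra step.

  # Move the target-side E810 port into its own namespace and address both sides.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # target address reachable from root namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # initiator address reachable from the namespace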
00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:46.014 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:46.014 [2024-10-08 18:35:04.256729] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:46.014 [2024-10-08 18:35:04.257935] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:27:46.014 [2024-10-08 18:35:04.258004] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:46.272 [2024-10-08 18:35:04.327496] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:46.272 [2024-10-08 18:35:04.441528] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:46.272 [2024-10-08 18:35:04.441622] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:46.272 [2024-10-08 18:35:04.441637] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:46.272 [2024-10-08 18:35:04.441648] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:46.272 [2024-10-08 18:35:04.441658] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:46.272 [2024-10-08 18:35:04.442462] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:46.272 [2024-10-08 18:35:04.442468] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.272 [2024-10-08 18:35:04.542149] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:46.272 [2024-10-08 18:35:04.542191] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:46.272 [2024-10-08 18:35:04.542435] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:27:46.530 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:46.530 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:27:46.530 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:46.530 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:46.530 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:46.530 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:46.530 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:46.530 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.530 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:46.530 [2024-10-08 18:35:04.591157] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:46.530 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.530 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:46.530 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.530 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:46.530 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.530 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:46.530 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.530 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:46.530 [2024-10-08 18:35:04.615425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:46.530 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.530 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:27:46.530 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.530 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:46.530 NULL1 00:27:46.530 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.530 18:35:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:46.530 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.530 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:46.530 Delay0 00:27:46.530 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.530 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:46.530 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.530 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:46.530 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.530 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1374366 00:27:46.530 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:27:46.530 18:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:27:46.530 [2024-10-08 18:35:04.688823] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
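[editor's note] The RPC sequence traced in this stretch builds the target side and then starts initiator I/O that is still in flight when the subsystem is deleted; that is what produces the error completions logged below. A condensed sketch follows, assuming rpc.py (spdk/scripts/rpc.py) is pointed at the running target's RPC socket; every command and argument is taken from the trace itself.

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_null_create NULL1 1000 512                 # 1000 MiB null bdev, 512-byte blocks
  rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # Initiator side: queue depth 128 random read/write against the delayed namespace ...
  spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  sleep 2
  # ... then tear the subsystem down while that I/O is still outstanding.
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1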
00:27:48.427 18:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:48.427 18:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.427 18:35:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:48.685 Write completed with error (sct=0, sc=8) 00:27:48.685 Write completed with error (sct=0, sc=8) 00:27:48.685 Write completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 starting I/O failed: -6 00:27:48.685 Write completed with error (sct=0, sc=8) 00:27:48.685 Write completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 Write completed with error (sct=0, sc=8) 00:27:48.685 starting I/O failed: -6 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 starting I/O failed: -6 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 Write completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 starting I/O failed: -6 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 Write completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 starting I/O failed: -6 00:27:48.685 Write completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 starting I/O failed: -6 00:27:48.685 Write completed with error (sct=0, sc=8) 00:27:48.685 Write completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 Write completed with error (sct=0, sc=8) 00:27:48.685 starting I/O failed: -6 00:27:48.685 Write completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 Write completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 starting I/O failed: -6 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 starting I/O failed: -6 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 starting I/O failed: -6 00:27:48.685 Write completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 starting I/O failed: -6 00:27:48.685 starting I/O failed: -6 00:27:48.685 starting I/O failed: -6 00:27:48.685 starting I/O failed: -6 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 starting I/O failed: -6 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 starting I/O failed: -6 
00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 starting I/O failed: -6 00:27:48.685 Write completed with error (sct=0, sc=8) 00:27:48.685 Write completed with error (sct=0, sc=8) 00:27:48.685 starting I/O failed: -6 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 starting I/O failed: -6 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 starting I/O failed: -6 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 starting I/O failed: -6 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 Write completed with error (sct=0, sc=8) 00:27:48.685 starting I/O failed: -6 00:27:48.685 Write completed with error (sct=0, sc=8) 00:27:48.685 Write completed with error (sct=0, sc=8) 00:27:48.685 starting I/O failed: -6 00:27:48.685 Write completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 starting I/O failed: -6 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 starting I/O failed: -6 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 starting I/O failed: -6 00:27:48.685 Write completed with error (sct=0, sc=8) 00:27:48.685 Write completed with error (sct=0, sc=8) 00:27:48.685 starting I/O failed: -6 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 starting I/O failed: -6 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 starting I/O failed: -6 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 starting I/O failed: -6 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 Read completed with error (sct=0, sc=8) 00:27:48.685 starting I/O failed: -6 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Write completed with error (sct=0, sc=8) 00:27:48.686 starting I/O failed: -6 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 starting I/O failed: -6 00:27:48.686 Write completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 starting I/O failed: -6 00:27:48.686 Write completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 starting I/O failed: -6 00:27:48.686 Write completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 starting I/O failed: -6 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 starting I/O failed: -6 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 starting I/O failed: -6 00:27:48.686 Write completed with error (sct=0, sc=8) 00:27:48.686 Write completed with error (sct=0, sc=8) 00:27:48.686 starting I/O failed: -6 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 starting I/O failed: -6 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 
00:27:48.686 starting I/O failed: -6 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 starting I/O failed: -6 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 starting I/O failed: -6 00:27:48.686 [2024-10-08 18:35:06.810259] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff6c0000c10 is same with the state(6) to be set 00:27:48.686 Write completed with error (sct=0, sc=8) 00:27:48.686 Write completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 starting I/O failed: -6 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Write completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 starting I/O failed: -6 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 starting I/O failed: -6 00:27:48.686 Write completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Write completed with error (sct=0, sc=8) 00:27:48.686 starting I/O failed: -6 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Write completed with error (sct=0, sc=8) 00:27:48.686 starting I/O failed: -6 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Write completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 starting I/O failed: -6 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Write completed with error (sct=0, sc=8) 00:27:48.686 Write completed with error (sct=0, sc=8) 00:27:48.686 starting I/O failed: -6 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Write completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 starting I/O failed: -6 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 starting I/O failed: -6 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Write completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 starting I/O failed: -6 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 [2024-10-08 18:35:06.810792] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48570 is same with the state(6) to be set 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Write completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Write completed with error (sct=0, sc=8) 00:27:48.686 
Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Write completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Write completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Write completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Write completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Write completed with error (sct=0, sc=8) 00:27:48.686 Write completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Write completed with error (sct=0, sc=8) 00:27:48.686 Write completed with error (sct=0, sc=8) 00:27:48.686 Write completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Write completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Write completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:48.686 Write completed with error (sct=0, sc=8) 00:27:48.686 Read completed with error (sct=0, sc=8) 00:27:49.620 [2024-10-08 18:35:07.784876] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb49a70 is same with the state(6) to be set 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Write completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Write completed with error (sct=0, sc=8) 00:27:49.620 Write completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Write completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 [2024-10-08 18:35:07.810304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48750 is same with the state(6) to be set 00:27:49.620 Read completed with 
error (sct=0, sc=8) 00:27:49.620 Write completed with error (sct=0, sc=8) 00:27:49.620 Write completed with error (sct=0, sc=8) 00:27:49.620 Write completed with error (sct=0, sc=8) 00:27:49.620 Write completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Write completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Write completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Write completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 [2024-10-08 18:35:07.810474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48390 is same with the state(6) to be set 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Write completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Write completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Write completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Write completed with error (sct=0, sc=8) 00:27:49.620 Write completed with error (sct=0, sc=8) 00:27:49.620 Write completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Write completed with error (sct=0, sc=8) 00:27:49.620 Write completed with error (sct=0, sc=8) 00:27:49.620 Write completed with error (sct=0, sc=8) 00:27:49.620 Write completed with error (sct=0, sc=8) 00:27:49.620 Write completed with error (sct=0, sc=8) 00:27:49.620 Write completed with error (sct=0, sc=8) 00:27:49.620 Write completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Write completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Write completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Write completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Write completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 [2024-10-08 18:35:07.811764] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff6c000cff0 is same with the state(6) to be set 00:27:49.620 Write completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Write completed with error 
(sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Write completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Write completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Write completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Write completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.620 Read completed with error (sct=0, sc=8) 00:27:49.621 Write completed with error (sct=0, sc=8) 00:27:49.621 Read completed with error (sct=0, sc=8) 00:27:49.621 Write completed with error (sct=0, sc=8) 00:27:49.621 Read completed with error (sct=0, sc=8) 00:27:49.621 [2024-10-08 18:35:07.813316] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff6c000d790 is same with the state(6) to be set 00:27:49.621 Initializing NVMe Controllers 00:27:49.621 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:49.621 Controller IO queue size 128, less than required. 00:27:49.621 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:49.621 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:27:49.621 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:27:49.621 Initialization complete. Launching workers. 
00:27:49.621 ======================================================== 00:27:49.621 Latency(us) 00:27:49.621 Device Information : IOPS MiB/s Average min max 00:27:49.621 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 157.24 0.08 927299.69 389.21 1012704.61 00:27:49.621 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 184.53 0.09 905695.24 494.76 1011974.07 00:27:49.621 ======================================================== 00:27:49.621 Total : 341.77 0.17 915635.17 389.21 1012704.61 00:27:49.621 00:27:49.621 [2024-10-08 18:35:07.814190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb49a70 (9): Bad file descriptor 00:27:49.621 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:27:49.621 18:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.621 18:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:27:49.621 18:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1374366 00:27:49.621 18:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:27:50.186 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:27:50.186 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1374366 00:27:50.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1374366) - No such process 00:27:50.186 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1374366 00:27:50.186 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:27:50.186 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1374366 00:27:50.186 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:27:50.186 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:50.186 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:27:50.186 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:50.186 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1374366 00:27:50.186 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:27:50.186 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:50.186 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:50.186 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:50.186 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:50.186 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.186 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:50.186 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.186 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:50.186 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.186 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:50.186 [2024-10-08 18:35:08.335400] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:50.186 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.186 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:50.186 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.186 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:50.186 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.186 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1374763 00:27:50.186 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:27:50.186 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1374763 00:27:50.186 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:50.186 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:27:50.186 [2024-10-08 18:35:08.392549] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
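[editor's note] The repeated "(( delay++ > 20 )) / kill -0 / sleep 0.5" lines that follow are the script polling for the second perf process to exit after the subsystem is deleted. An illustrative shape of that loop is shown below; perf_pid stands for the PID recorded when spdk_nvme_perf was launched, and the real script fails the test rather than simply breaking if roughly 20 half-second probes pass.

  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 20 )) && break      # give up if perf never exits
      sleep 0.5
  done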
00:27:50.750 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:50.750 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1374763 00:27:50.750 18:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:51.315 18:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:51.315 18:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1374763 00:27:51.315 18:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:51.572 18:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:51.572 18:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1374763 00:27:51.572 18:35:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:52.136 18:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:52.136 18:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1374763 00:27:52.136 18:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:52.700 18:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:52.700 18:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1374763 00:27:52.700 18:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:53.265 18:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:53.265 18:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1374763 00:27:53.265 18:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:53.265 Initializing NVMe Controllers 00:27:53.265 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:53.265 Controller IO queue size 128, less than required. 00:27:53.265 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:53.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:27:53.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:27:53.265 Initialization complete. Launching workers. 
00:27:53.265 ======================================================== 00:27:53.265 Latency(us) 00:27:53.265 Device Information : IOPS MiB/s Average min max 00:27:53.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005334.43 1000241.43 1042626.60 00:27:53.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004855.28 1000206.60 1011531.24 00:27:53.265 ======================================================== 00:27:53.265 Total : 256.00 0.12 1005094.85 1000206.60 1042626.60 00:27:53.265 00:27:53.830 18:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:53.830 18:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1374763 00:27:53.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1374763) - No such process 00:27:53.830 18:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1374763 00:27:53.830 18:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:27:53.830 18:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:27:53.830 18:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:53.830 18:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:27:53.830 18:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:53.830 18:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:27:53.830 18:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:53.830 18:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:53.830 rmmod nvme_tcp 00:27:53.830 rmmod nvme_fabrics 00:27:53.830 rmmod nvme_keyring 00:27:53.830 18:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:53.830 18:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:27:53.830 18:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:27:53.830 18:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 1374265 ']' 00:27:53.830 18:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 1374265 00:27:53.830 18:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1374265 ']' 00:27:53.830 18:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1374265 00:27:53.830 18:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:27:53.830 18:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:53.830 18:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1374265 00:27:53.830 18:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:53.830 18:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:53.830 18:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1374265' 00:27:53.830 killing process with pid 1374265 00:27:53.830 18:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1374265 00:27:53.830 18:35:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1374265 00:27:54.088 18:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:54.088 18:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:54.088 18:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:54.088 18:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:27:54.088 18:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:27:54.088 18:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:54.088 18:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:27:54.088 18:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:54.088 18:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:54.088 18:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:54.088 18:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:54.088 18:35:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.996 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:55.996 00:27:55.996 real 0m12.484s 00:27:55.996 user 0m24.716s 00:27:55.996 sys 0m3.701s 00:27:55.996 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:55.996 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:55.996 ************************************ 00:27:55.996 END TEST nvmf_delete_subsystem 00:27:55.996 ************************************ 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:56.255 ************************************ 00:27:56.255 START TEST nvmf_host_management 00:27:56.255 ************************************ 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:27:56.255 * Looking for test storage... 00:27:56.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:56.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.255 --rc genhtml_branch_coverage=1 00:27:56.255 --rc genhtml_function_coverage=1 00:27:56.255 --rc genhtml_legend=1 00:27:56.255 --rc geninfo_all_blocks=1 00:27:56.255 --rc geninfo_unexecuted_blocks=1 00:27:56.255 00:27:56.255 ' 00:27:56.255 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:56.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.255 --rc genhtml_branch_coverage=1 00:27:56.255 --rc genhtml_function_coverage=1 00:27:56.255 --rc genhtml_legend=1 00:27:56.255 --rc geninfo_all_blocks=1 00:27:56.255 --rc geninfo_unexecuted_blocks=1 00:27:56.255 00:27:56.255 ' 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:27:56.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.256 --rc genhtml_branch_coverage=1 00:27:56.256 --rc genhtml_function_coverage=1 00:27:56.256 --rc genhtml_legend=1 00:27:56.256 --rc geninfo_all_blocks=1 00:27:56.256 --rc geninfo_unexecuted_blocks=1 00:27:56.256 00:27:56.256 ' 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:56.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.256 --rc genhtml_branch_coverage=1 00:27:56.256 --rc genhtml_function_coverage=1 00:27:56.256 --rc genhtml_legend=1 
00:27:56.256 --rc geninfo_all_blocks=1 00:27:56.256 --rc geninfo_unexecuted_blocks=1 00:27:56.256 00:27:56.256 ' 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:56.256 18:35:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:27:56.256 18:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:58.787 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:58.787 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:27:58.787 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:58.787 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:58.787 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:58.787 18:35:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:58.787 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:58.787 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:27:58.787 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:58.787 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:27:58.787 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:27:58.787 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:27:58.787 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:27:58.787 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:27:58.787 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:27:58.787 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:58.787 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:58.787 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:58.787 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:58.787 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:58.787 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:58.787 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:58.787 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:58.787 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:58.787 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:58.788 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:58.788 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 
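The Found 0000:09:00.x entries here come from gather_supported_nvmf_pci_devs: each E810 function (device ID 0x159b) is matched by PCI ID and its network interface is resolved through sysfs, where the kernel lists a NIC's netdevs under the PCI function's node. A rough standalone illustration of that lookup, using this run's BDFs but otherwise a simplification of common.sh rather than a copy:

for pci in 0000:09:00.0 0000:09:00.1; do
    for path in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$path" ] || continue                        # skip functions with no bound netdev
        echo "Found net devices under $pci: ${path##*/}"  # e.g. cvl_0_0 / cvl_0_1
    done
done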
00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:58.788 Found net devices under 0000:09:00.0: cvl_0_0 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:58.788 Found net devices under 0000:09:00.1: cvl_0_1 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:58.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:58.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:27:58.788 00:27:58.788 --- 10.0.0.2 ping statistics --- 00:27:58.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.788 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:58.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:58.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:27:58.788 00:27:58.788 --- 10.0.0.1 ping statistics --- 00:27:58.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.788 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=1377104 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 1377104 00:27:58.788 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1377104 ']' 00:27:58.789 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:58.789 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:58.789 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:58.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:58.789 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:58.789 18:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:58.789 [2024-10-08 18:35:16.759132] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:58.789 [2024-10-08 18:35:16.760262] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:27:58.789 [2024-10-08 18:35:16.760356] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:58.789 [2024-10-08 18:35:16.830170] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:58.789 [2024-10-08 18:35:16.939720] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:58.789 [2024-10-08 18:35:16.939767] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:58.789 [2024-10-08 18:35:16.939790] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:58.789 [2024-10-08 18:35:16.939801] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:58.789 [2024-10-08 18:35:16.939825] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:58.789 [2024-10-08 18:35:16.941592] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:27:58.789 [2024-10-08 18:35:16.941682] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:27:58.789 [2024-10-08 18:35:16.941752] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:27:58.789 [2024-10-08 18:35:16.941749] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:27:58.789 [2024-10-08 18:35:17.040213] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:58.789 [2024-10-08 18:35:17.040449] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:58.789 [2024-10-08 18:35:17.040702] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:58.789 [2024-10-08 18:35:17.041310] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:58.789 [2024-10-08 18:35:17.041563] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
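At this point the target is up: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace with -m 0x1E (reactors on cores 1-4) and --interrupt-mode, so idle reactors sleep on file descriptors instead of busy-polling. Condensed from the commands traced above (common.sh@271-287 and @506); the final socket check is a simplified stand-in for waitforlisten, not the helper itself:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator-side port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done     # crude substitute for waitforlisten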
00:27:58.789 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:58.789 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:27:58.789 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:58.789 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:58.789 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:59.047 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:59.047 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:59.047 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.047 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:59.047 [2024-10-08 18:35:17.094448] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:59.047 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.047 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:27:59.047 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:59.047 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:59.047 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:59.047 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:27:59.047 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:27:59.047 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.047 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:59.047 Malloc0 00:27:59.047 [2024-10-08 18:35:17.162677] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:59.047 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.047 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:27:59.047 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:59.047 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:59.047 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1377269 00:27:59.047 18:35:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1377269 /var/tmp/bdevperf.sock 00:27:59.047 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1377269 ']' 00:27:59.047 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:59.047 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:27:59.048 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:59.048 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:59.048 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:27:59.048 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:59.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:59.048 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:27:59.048 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:59.048 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:59.048 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:59.048 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:59.048 { 00:27:59.048 "params": { 00:27:59.048 "name": "Nvme$subsystem", 00:27:59.048 "trtype": "$TEST_TRANSPORT", 00:27:59.048 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:59.048 "adrfam": "ipv4", 00:27:59.048 "trsvcid": "$NVMF_PORT", 00:27:59.048 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:59.048 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:59.048 "hdgst": ${hdgst:-false}, 00:27:59.048 "ddgst": ${ddgst:-false} 00:27:59.048 }, 00:27:59.048 "method": "bdev_nvme_attach_controller" 00:27:59.048 } 00:27:59.048 EOF 00:27:59.048 )") 00:27:59.048 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:27:59.048 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
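gen_nvmf_target_json, traced just above, builds the per-controller attach stanza from a heredoc (so $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT expand) and runs the result through jq, which pretty-prints it and fails on malformed JSON; the expanded stanza is printed just below, and the helper goes on to embed it in the config handed to bdevperf via --json /dev/fd/63. A standalone sketch of that heredoc-plus-jq idiom, with values hard-coded to this run's target:

subsystem=0
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$config" | jq .    # jq exits non-zero if the generated JSON is malformed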
00:27:59.048 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:27:59.048 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:27:59.048 "params": { 00:27:59.048 "name": "Nvme0", 00:27:59.048 "trtype": "tcp", 00:27:59.048 "traddr": "10.0.0.2", 00:27:59.048 "adrfam": "ipv4", 00:27:59.048 "trsvcid": "4420", 00:27:59.048 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:59.048 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:59.048 "hdgst": false, 00:27:59.048 "ddgst": false 00:27:59.048 }, 00:27:59.048 "method": "bdev_nvme_attach_controller" 00:27:59.048 }' 00:27:59.048 [2024-10-08 18:35:17.246182] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:27:59.048 [2024-10-08 18:35:17.246270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1377269 ] 00:27:59.048 [2024-10-08 18:35:17.305355] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.306 [2024-10-08 18:35:17.418316] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.563 Running I/O for 10 seconds... 00:27:59.563 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:59.563 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:27:59.563 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:59.563 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.563 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:59.563 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.563 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:59.563 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:27:59.563 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:59.563 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:27:59.564 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:27:59.564 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:27:59.564 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:27:59.564 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:27:59.564 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:27:59.564 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:27:59.564 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.564 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:59.564 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.564 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:27:59.564 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:27:59.564 18:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:27:59.821 18:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:27:59.821 18:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:27:59.821 18:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:27:59.821 18:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:27:59.821 18:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.821 18:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:59.821 18:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.080 18:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:28:00.080 18:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:28:00.080 18:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:28:00.080 18:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:28:00.080 18:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:28:00.080 18:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:00.080 18:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.080 18:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:00.080 [2024-10-08 18:35:18.115143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:00.080 [2024-10-08 18:35:18.115194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.080 [2024-10-08 18:35:18.115221] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:00.080 [2024-10-08 18:35:18.115236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.080 [2024-10-08 18:35:18.115250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:00.080 [2024-10-08 18:35:18.115265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.080 [2024-10-08 18:35:18.115296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:00.080 [2024-10-08 18:35:18.115313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.080 [2024-10-08 18:35:18.115326] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df4b00 is same with the state(6) to be set 00:28:00.080 18:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.080 18:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:00.080 18:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.080 18:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:00.080 18:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.080 18:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:28:00.080 [2024-10-08 18:35:18.128171] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df4b00 (9): Bad file descriptor 00:28:00.080 [2024-10-08 18:35:18.128283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.080 [2024-10-08 18:35:18.128317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.080 [2024-10-08 18:35:18.128352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.080 [2024-10-08 18:35:18.128369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.080 [2024-10-08 18:35:18.128386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.080 [2024-10-08 18:35:18.128401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.080 [2024-10-08 18:35:18.128417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.080 [2024-10-08 18:35:18.128432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.080 [2024-10-08 18:35:18.128448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.080 [2024-10-08 18:35:18.128462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.080 [2024-10-08 18:35:18.128479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.080 [2024-10-08 18:35:18.128493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.080 [2024-10-08 18:35:18.128510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.080 [2024-10-08 18:35:18.128525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.080 [2024-10-08 18:35:18.128541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.080 [2024-10-08 18:35:18.128555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.080 [2024-10-08 18:35:18.128581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.080 [2024-10-08 18:35:18.128612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.080 [2024-10-08 18:35:18.128628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.080 [2024-10-08 18:35:18.128654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.080 [2024-10-08 18:35:18.128669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.080 [2024-10-08 18:35:18.128683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.080 [2024-10-08 18:35:18.128699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.080 [2024-10-08 18:35:18.128713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.080 [2024-10-08 18:35:18.128729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.080 [2024-10-08 18:35:18.128743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.080 [2024-10-08 18:35:18.128758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.080 [2024-10-08 18:35:18.128777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.128793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.128807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.128822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.128836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.128851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.128866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.128881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.128895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.128911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.128925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.128942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.128956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.128971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.128985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.129000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.129015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.129030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.129044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.129059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.129074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.129090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.129104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.129119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.129133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.129152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.129168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.129183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.129198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.129214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.129229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.129245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.129260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.129285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.129324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.129342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.129357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.129375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.129390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.129406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.129422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:00.081 [2024-10-08 18:35:18.129439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.129455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.129471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.129486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.129502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.129516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.129532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.129547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.129563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.129581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.129618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.129633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.129648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.129662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.129678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.129691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.129707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.129721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.129736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.129750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 
[2024-10-08 18:35:18.129765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.129779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.129794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.129808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.129823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.129837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.129852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.129867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.129882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.129896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.129911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.129925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.129941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.129955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.129974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.129988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.130003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.130018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.081 [2024-10-08 18:35:18.130033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.081 [2024-10-08 18:35:18.130046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.082 [2024-10-08 
18:35:18.130061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.082 [2024-10-08 18:35:18.130075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.082 [2024-10-08 18:35:18.130090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.082 [2024-10-08 18:35:18.130105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.082 [2024-10-08 18:35:18.130120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.082 [2024-10-08 18:35:18.130134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.082 [2024-10-08 18:35:18.130149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.082 [2024-10-08 18:35:18.130164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.082 [2024-10-08 18:35:18.130179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.082 [2024-10-08 18:35:18.130193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.082 [2024-10-08 18:35:18.130208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.082 [2024-10-08 18:35:18.130223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.082 [2024-10-08 18:35:18.130238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.082 [2024-10-08 18:35:18.130252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.082 [2024-10-08 18:35:18.130281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.082 [2024-10-08 18:35:18.130305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.082 [2024-10-08 18:35:18.130322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.082 [2024-10-08 18:35:18.130337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.082 [2024-10-08 18:35:18.130353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.082 [2024-10-08 18:35:18.130371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:00.082 [2024-10-08 
18:35:18.130453] bdev_nvme.c:1734:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200d5a0 was disconnected and freed. reset controller. 00:28:00.082 [2024-10-08 18:35:18.131586] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:00.082 task offset: 81920 on job bdev=Nvme0n1 fails 00:28:00.082 00:28:00.082 Latency(us) 00:28:00.082 [2024-10-08T16:35:18.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:00.082 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:00.082 Job: Nvme0n1 ended in about 0.40 seconds with error 00:28:00.082 Verification LBA range: start 0x0 length 0x400 00:28:00.082 Nvme0n1 : 0.40 1582.11 98.88 158.21 0.00 35723.96 2621.44 34175.81 00:28:00.082 [2024-10-08T16:35:18.369Z] =================================================================================================================== 00:28:00.082 [2024-10-08T16:35:18.369Z] Total : 1582.11 98.88 158.21 0.00 35723.96 2621.44 34175.81 00:28:00.082 [2024-10-08 18:35:18.133461] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:00.082 [2024-10-08 18:35:18.225429] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:01.013 18:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1377269 00:28:01.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1377269) - No such process 00:28:01.013 18:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:28:01.013 18:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:28:01.013 18:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:01.013 18:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:28:01.013 18:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:28:01.013 18:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:28:01.013 18:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:01.013 18:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:01.013 { 00:28:01.013 "params": { 00:28:01.013 "name": "Nvme$subsystem", 00:28:01.013 "trtype": "$TEST_TRANSPORT", 00:28:01.013 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.013 "adrfam": "ipv4", 00:28:01.013 "trsvcid": "$NVMF_PORT", 00:28:01.013 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.013 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.013 "hdgst": ${hdgst:-false}, 00:28:01.013 "ddgst": ${ddgst:-false} 00:28:01.013 }, 00:28:01.013 "method": "bdev_nvme_attach_controller" 00:28:01.013 } 00:28:01.013 EOF 00:28:01.013 )") 00:28:01.013 18:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:28:01.013 18:35:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:28:01.013 18:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:28:01.013 18:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:01.013 "params": { 00:28:01.013 "name": "Nvme0", 00:28:01.013 "trtype": "tcp", 00:28:01.013 "traddr": "10.0.0.2", 00:28:01.013 "adrfam": "ipv4", 00:28:01.013 "trsvcid": "4420", 00:28:01.013 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:01.013 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:01.013 "hdgst": false, 00:28:01.013 "ddgst": false 00:28:01.013 }, 00:28:01.013 "method": "bdev_nvme_attach_controller" 00:28:01.013 }' 00:28:01.013 [2024-10-08 18:35:19.177949] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:28:01.013 [2024-10-08 18:35:19.178044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1377557 ] 00:28:01.013 [2024-10-08 18:35:19.238386] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.270 [2024-10-08 18:35:19.352761] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.528 Running I/O for 1 seconds... 00:28:02.459 1664.00 IOPS, 104.00 MiB/s 00:28:02.459 Latency(us) 00:28:02.459 [2024-10-08T16:35:20.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:02.459 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:02.459 Verification LBA range: start 0x0 length 0x400 00:28:02.459 Nvme0n1 : 1.01 1710.00 106.87 0.00 0.00 36809.41 4611.79 33399.09 00:28:02.459 [2024-10-08T16:35:20.746Z] =================================================================================================================== 00:28:02.459 [2024-10-08T16:35:20.746Z] Total : 1710.00 106.87 0.00 0.00 36809.41 4611.79 33399.09 00:28:02.717 18:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:28:02.717 18:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:28:02.717 18:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:02.717 18:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:02.717 18:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:28:02.717 18:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:02.717 18:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:28:02.717 18:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:02.717 18:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:28:02.717 18:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 
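The dump above shows every WRITE still outstanding on I/O qpair 1 completing with ABORTED - SQ DELETION (00/08, i.e. generic status type 0x0, status code 0x08) once the target side drops the connection; bdevperf's first job fails after roughly 0.40 s, and after the host is added back to nqn.2016-06.io.spdk:cnode0 the controller reset succeeds. The script then launches a second bdevperf run against the same subsystem, feeding it a JSON config generated by gen_nvmf_target_json through /dev/fd/62, and that run completes at about 1710 IOPS. A minimal sketch of an equivalent standalone invocation follows, using the attach parameters printed in the config above; the wrapper object and the /tmp file name are illustrative, and the real generated config carries additional entries omitted here.

# Sketch only: reproduce the second bdevperf run from this log by hand.
# The attach parameters mirror the config printed above; bdevperf.json is an
# illustrative file name, not something the test scripts create.
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same workload as the log: queue depth 64, 64 KiB I/Os, verify, 1 second.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1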
00:28:02.717 18:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:02.717 rmmod nvme_tcp 00:28:02.717 rmmod nvme_fabrics 00:28:02.717 rmmod nvme_keyring 00:28:02.717 18:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:02.717 18:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:28:02.717 18:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:28:02.717 18:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 1377104 ']' 00:28:02.717 18:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 1377104 00:28:02.717 18:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1377104 ']' 00:28:02.717 18:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1377104 00:28:02.717 18:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:28:02.717 18:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:02.717 18:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1377104 00:28:02.975 18:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:02.975 18:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:02.975 18:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1377104' 00:28:02.975 killing process with pid 1377104 00:28:02.975 18:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1377104 00:28:02.975 18:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1377104 00:28:03.234 [2024-10-08 18:35:21.310974] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:28:03.234 18:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:03.234 18:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:03.234 18:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:03.234 18:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:28:03.234 18:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:28:03.234 18:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:03.234 18:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:28:03.234 18:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:03.234 18:35:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:03.234 18:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.234 18:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:03.234 18:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:05.136 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:05.136 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:28:05.136 00:28:05.136 real 0m9.061s 00:28:05.136 user 0m18.675s 00:28:05.136 sys 0m3.761s 00:28:05.136 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:05.136 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:05.136 ************************************ 00:28:05.136 END TEST nvmf_host_management 00:28:05.136 ************************************ 00:28:05.136 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:05.136 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:28:05.136 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:05.136 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:05.394 ************************************ 00:28:05.394 START TEST nvmf_lvol 00:28:05.394 ************************************ 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:05.394 * Looking for test storage... 
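For reference before the nvmf_lvol setup below, the host_management teardown that just completed (nvmftestfini) unwinds the fixture: the Linux NVMe/TCP initiator modules are unloaded, the target process (pid 1377104 in this run) is killed, the SPDK-tagged firewall rules are stripped, and the namespace and addresses are removed. A condensed sketch of those steps as they appear in this log; the explicit netns deletion is an assumption, since _remove_spdk_ns runs with its tracing redirected.

# Sketch of the nvmftestfini cleanup path seen above.
modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics / nvme_keyring here
modprobe -v -r nvme-fabrics
# Drop only the firewall rules that nvmftestinit tagged with an SPDK_NVMF comment.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk   # assumption: what _remove_spdk_ns amounts to
ip -4 addr flush cvl_0_1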
00:28:05.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:05.394 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:05.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.395 --rc genhtml_branch_coverage=1 00:28:05.395 --rc genhtml_function_coverage=1 00:28:05.395 --rc genhtml_legend=1 00:28:05.395 --rc geninfo_all_blocks=1 00:28:05.395 --rc geninfo_unexecuted_blocks=1 00:28:05.395 00:28:05.395 ' 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:05.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.395 --rc genhtml_branch_coverage=1 00:28:05.395 --rc genhtml_function_coverage=1 00:28:05.395 --rc genhtml_legend=1 00:28:05.395 --rc geninfo_all_blocks=1 00:28:05.395 --rc geninfo_unexecuted_blocks=1 00:28:05.395 00:28:05.395 ' 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:05.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.395 --rc genhtml_branch_coverage=1 00:28:05.395 --rc genhtml_function_coverage=1 00:28:05.395 --rc genhtml_legend=1 00:28:05.395 --rc geninfo_all_blocks=1 00:28:05.395 --rc geninfo_unexecuted_blocks=1 00:28:05.395 00:28:05.395 ' 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:05.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.395 --rc genhtml_branch_coverage=1 00:28:05.395 --rc genhtml_function_coverage=1 00:28:05.395 --rc genhtml_legend=1 00:28:05.395 --rc geninfo_all_blocks=1 00:28:05.395 --rc geninfo_unexecuted_blocks=1 00:28:05.395 00:28:05.395 ' 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:05.395 18:35:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:28:05.395 18:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:07.297 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:07.297 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:28:07.297 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:07.297 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:07.297 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:07.297 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:28:07.297 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:07.297 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:28:07.297 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:07.297 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:07.556 18:35:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:07.556 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:07.556 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:07.556 Found net devices under 0000:09:00.0: cvl_0_0 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:07.556 Found net devices under 0000:09:00.1: cvl_0_1 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:07.556 
18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:07.556 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:07.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:07.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.420 ms 00:28:07.557 00:28:07.557 --- 10.0.0.2 ping statistics --- 00:28:07.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:07.557 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:28:07.557 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:07.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:07.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:28:07.557 00:28:07.557 --- 10.0.0.1 ping statistics --- 00:28:07.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:07.557 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:28:07.557 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:07.557 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:28:07.557 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:07.557 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:07.557 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:07.557 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:07.557 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:07.557 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:07.557 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:07.557 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:28:07.557 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:07.557 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:07.557 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:07.557 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=1379638 00:28:07.557 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:28:07.557 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 1379638 00:28:07.557 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1379638 ']' 00:28:07.557 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:07.557 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:07.557 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:07.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:07.557 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:07.557 18:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:07.557 [2024-10-08 18:35:25.801376] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:28:07.557 [2024-10-08 18:35:25.802453] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:28:07.557 [2024-10-08 18:35:25.802508] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:07.814 [2024-10-08 18:35:25.869165] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:07.814 [2024-10-08 18:35:25.979490] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:07.814 [2024-10-08 18:35:25.979545] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:07.814 [2024-10-08 18:35:25.979574] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:07.814 [2024-10-08 18:35:25.979586] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:07.814 [2024-10-08 18:35:25.979596] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:07.814 [2024-10-08 18:35:25.980441] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:07.814 [2024-10-08 18:35:25.980555] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:28:07.814 [2024-10-08 18:35:25.980559] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.814 [2024-10-08 18:35:26.087636] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:07.814 [2024-10-08 18:35:26.087902] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:07.814 [2024-10-08 18:35:26.087909] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:07.814 [2024-10-08 18:35:26.088184] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
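With the interfaces selected, nvmf_tcp_init builds an asymmetric two-port topology: the target port cvl_0_0 is moved into the fresh cvl_0_0_ns_spdk namespace and addressed 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace at 10.0.0.1/24, an iptables rule opens TCP/4420 on the initiator side, and both directions are ping-verified before the target comes up. A recap of the commands as they appear in the trace (interface names, addresses and the nvmf_tgt invocation are taken from the log; this is a summary, not the script itself):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                    # root ns -> target port
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> initiator port
    # nvmf_lvol target: 3 reactors (cores 0-2), interrupt mode, run inside the namespace
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x7

The per-thread notices above confirm the effect of --interrupt-mode: the app_thread and each nvmf_tgt poll group run event-driven rather than in the default polling mode.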
00:28:08.071 18:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:08.071 18:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:28:08.071 18:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:08.071 18:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:08.071 18:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:08.071 18:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:08.071 18:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:08.328 [2024-10-08 18:35:26.385210] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:08.328 18:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:08.586 18:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:28:08.586 18:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:08.844 18:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:28:08.844 18:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:28:09.101 18:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:28:09.358 18:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c5a9e1e7-a748-4e10-bf28-e471d8389b0d 00:28:09.358 18:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c5a9e1e7-a748-4e10-bf28-e471d8389b0d lvol 20 00:28:09.615 18:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=fd3f4b75-5f91-466c-aa43-c96e7690708f 00:28:09.616 18:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:09.873 18:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fd3f4b75-5f91-466c-aa43-c96e7690708f 00:28:10.130 18:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:10.387 [2024-10-08 18:35:28.613383] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:28:10.387 18:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:10.644 18:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1380061 00:28:10.645 18:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:28:10.645 18:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:28:12.017 18:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot fd3f4b75-5f91-466c-aa43-c96e7690708f MY_SNAPSHOT 00:28:12.017 18:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=bc2ee974-9515-45b6-aa3d-9e14e1531a8a 00:28:12.017 18:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize fd3f4b75-5f91-466c-aa43-c96e7690708f 30 00:28:12.275 18:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone bc2ee974-9515-45b6-aa3d-9e14e1531a8a MY_CLONE 00:28:12.840 18:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f9e9d631-f622-42eb-878f-9dea3b9fcf6c 00:28:12.840 18:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f9e9d631-f622-42eb-878f-9dea3b9fcf6c 00:28:13.404 18:35:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1380061 00:28:21.510 Initializing NVMe Controllers 00:28:21.510 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:21.510 Controller IO queue size 128, less than required. 00:28:21.510 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:21.510 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:28:21.510 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:28:21.510 Initialization complete. Launching workers. 
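Everything after the target starts is driven through scripts/rpc.py against the UNIX socket inside the namespace; the latency table below is the output of the spdk_nvme_perf run that keeps writing to the logical volume while it is snapshotted, resized, cloned and inflated. The sequence, condensed from the nvmf_lvol.sh trace above (rpc.py stands for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path, and the <...> placeholders stand for the UUIDs this run printed, e.g. lvstore c5a9e1e7-... and lvol fd3f4b75-...):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                      # Malloc0
    rpc.py bdev_malloc_create 64 512                      # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    rpc.py bdev_lvol_create_lvstore raid0 lvs             # -> <lvs-uuid>
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20         # -> <lvol-uuid>
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # spdk_nvme_perf runs 4 KiB randwrite, queue depth 128, for 10 s on cores 3 and 4
    # in the background; meanwhile the volume is mutated:
    rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT     # -> <snap-uuid>
    rpc.py bdev_lvol_resize <lvol-uuid> 30
    rpc.py bdev_lvol_clone <snap-uuid> MY_CLONE           # -> <clone-uuid>
    rpc.py bdev_lvol_inflate <clone-uuid>

Once the perf process (pid 1380061) exits, the subsystem, lvol and lvstore are deleted and nvmftestfini tears the namespace and iptables rule back down, which is the cleanup visible after the latency table.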
00:28:21.510 ======================================================== 00:28:21.510 Latency(us) 00:28:21.510 Device Information : IOPS MiB/s Average min max 00:28:21.510 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10678.70 41.71 11988.81 3946.61 120202.46 00:28:21.510 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10519.00 41.09 12176.43 4362.00 71855.44 00:28:21.510 ======================================================== 00:28:21.510 Total : 21197.70 82.80 12081.91 3946.61 120202.46 00:28:21.510 00:28:21.510 18:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:21.510 18:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fd3f4b75-5f91-466c-aa43-c96e7690708f 00:28:21.510 18:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c5a9e1e7-a748-4e10-bf28-e471d8389b0d 00:28:21.793 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:28:21.793 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:28:21.793 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:28:21.793 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:21.793 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:28:21.793 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:21.793 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:28:21.793 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:21.793 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:21.793 rmmod nvme_tcp 00:28:21.793 rmmod nvme_fabrics 00:28:22.092 rmmod nvme_keyring 00:28:22.092 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:22.092 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:28:22.092 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:28:22.092 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 1379638 ']' 00:28:22.092 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 1379638 00:28:22.092 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1379638 ']' 00:28:22.092 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1379638 00:28:22.092 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:28:22.092 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:22.092 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1379638 00:28:22.092 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:22.092 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:22.092 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1379638' 00:28:22.092 killing process with pid 1379638 00:28:22.092 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1379638 00:28:22.092 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1379638 00:28:22.367 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:22.367 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:22.367 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:22.367 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:28:22.367 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:28:22.367 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:22.367 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:28:22.368 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:22.368 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:22.368 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.368 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:22.368 18:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.271 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:24.271 00:28:24.271 real 0m19.088s 00:28:24.271 user 0m56.409s 00:28:24.271 sys 0m7.463s 00:28:24.271 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:24.271 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:28:24.271 ************************************ 00:28:24.271 END TEST nvmf_lvol 00:28:24.271 ************************************ 00:28:24.271 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:28:24.271 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:28:24.271 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:24.271 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:24.530 ************************************ 00:28:24.530 START TEST nvmf_lvs_grow 00:28:24.530 
************************************ 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:28:24.530 * Looking for test storage... 00:28:24.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:24.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.530 --rc genhtml_branch_coverage=1 00:28:24.530 --rc genhtml_function_coverage=1 00:28:24.530 --rc genhtml_legend=1 00:28:24.530 --rc geninfo_all_blocks=1 00:28:24.530 --rc geninfo_unexecuted_blocks=1 00:28:24.530 00:28:24.530 ' 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:24.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.530 --rc genhtml_branch_coverage=1 00:28:24.530 --rc genhtml_function_coverage=1 00:28:24.530 --rc genhtml_legend=1 00:28:24.530 --rc geninfo_all_blocks=1 00:28:24.530 --rc geninfo_unexecuted_blocks=1 00:28:24.530 00:28:24.530 ' 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:24.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.530 --rc genhtml_branch_coverage=1 00:28:24.530 --rc genhtml_function_coverage=1 00:28:24.530 --rc genhtml_legend=1 00:28:24.530 --rc geninfo_all_blocks=1 00:28:24.530 --rc geninfo_unexecuted_blocks=1 00:28:24.530 00:28:24.530 ' 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:24.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.530 --rc genhtml_branch_coverage=1 00:28:24.530 --rc genhtml_function_coverage=1 00:28:24.530 --rc genhtml_legend=1 00:28:24.530 --rc geninfo_all_blocks=1 00:28:24.530 --rc geninfo_unexecuted_blocks=1 00:28:24.530 00:28:24.530 ' 00:28:24.530 18:35:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:24.530 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:28:24.531 18:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:27.060 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:27.060 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:28:27.060 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:27.060 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:27.060 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:27.060 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:27.060 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:27.060 18:35:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:27.061 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:27.061 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:27.061 Found net devices under 0000:09:00.0: cvl_0_0 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:27.061 Found net devices under 0000:09:00.1: cvl_0_1 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:27.061 18:35:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:27.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:27.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:28:27.061 00:28:27.061 --- 10.0.0.2 ping statistics --- 00:28:27.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.061 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:27.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:27.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:28:27.061 00:28:27.061 --- 10.0.0.1 ping statistics --- 00:28:27.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.061 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:27.061 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:27.062 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:27.062 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:27.062 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:27.062 18:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:27.062 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:28:27.062 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:27.062 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:27.062 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:27.062 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=1383323 00:28:27.062 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:28:27.062 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 1383323 00:28:27.062 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1383323 ']' 00:28:27.062 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:27.062 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:27.062 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:27.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:27.062 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:27.062 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:27.062 [2024-10-08 18:35:45.077486] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
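The nvmf_lvs_grow suite repeats the same bring-up (same NICs, namespace, addresses and iptables rule as above) but launches its own target instance with a single reactor; only the core mask and the resulting pid differ from the lvol run. Both invocations, abbreviated from the log (the leading "ip netns exec cvl_0_0_ns_spdk" and the full build path are identical in both):

    # nvmf_lvol     (pid 1379638):  nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7
    # nvmf_lvs_grow (pid 1383323):  nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1

With -m 0x1 only core 0 hosts a reactor, so the startup notices below report a single reactor and a single nvmf_tgt poll group being switched to interrupt mode.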
00:28:27.062 [2024-10-08 18:35:45.078557] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:28:27.062 [2024-10-08 18:35:45.078626] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:27.062 [2024-10-08 18:35:45.141881] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.062 [2024-10-08 18:35:45.251644] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:27.062 [2024-10-08 18:35:45.251695] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:27.062 [2024-10-08 18:35:45.251718] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:27.062 [2024-10-08 18:35:45.251728] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:27.062 [2024-10-08 18:35:45.251745] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:27.062 [2024-10-08 18:35:45.252350] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:27.062 [2024-10-08 18:35:45.342038] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:27.062 [2024-10-08 18:35:45.342336] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:27.319 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:27.319 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:28:27.319 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:27.319 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:27.320 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:27.320 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:27.320 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:27.578 [2024-10-08 18:35:45.636991] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:27.578 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:28:27.578 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:27.578 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:27.578 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:27.578 ************************************ 00:28:27.578 START TEST lvs_grow_clean 00:28:27.578 ************************************ 00:28:27.578 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # 
lvs_grow 00:28:27.578 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:28:27.578 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:28:27.578 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:28:27.578 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:28:27.578 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:28:27.578 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:28:27.578 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:27.578 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:27.578 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:27.835 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:28:27.835 18:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:28:28.093 18:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a0163e41-3d99-496d-afb5-f1570d62a744 00:28:28.093 18:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0163e41-3d99-496d-afb5-f1570d62a744 00:28:28.093 18:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:28:28.351 18:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:28:28.351 18:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:28:28.351 18:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a0163e41-3d99-496d-afb5-f1570d62a744 lvol 150 00:28:28.609 18:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=636d3163-6e0e-4ed4-87a9-dc5160b6109a 00:28:28.609 18:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:28.609 18:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:28:28.867 [2024-10-08 18:35:47.084856] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:28:28.867 [2024-10-08 18:35:47.084946] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:28:28.867 true 00:28:28.867 18:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0163e41-3d99-496d-afb5-f1570d62a744 00:28:28.867 18:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:28:29.124 18:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:28:29.124 18:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:29.383 18:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 636d3163-6e0e-4ed4-87a9-dc5160b6109a 00:28:29.948 18:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:29.948 [2024-10-08 18:35:48.189181] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:29.948 18:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:30.204 18:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1383762 00:28:30.205 18:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:28:30.205 18:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:30.205 18:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1383762 /var/tmp/bdevperf.sock 00:28:30.205 18:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1383762 ']' 00:28:30.205 18:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:28:30.205 18:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:30.205 18:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:30.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:30.205 18:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:30.205 18:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:28:30.461 [2024-10-08 18:35:48.520113] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:28:30.461 [2024-10-08 18:35:48.520195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1383762 ] 00:28:30.461 [2024-10-08 18:35:48.579206] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.461 [2024-10-08 18:35:48.695509] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:30.719 18:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:30.719 18:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:28:30.719 18:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:28:30.976 Nvme0n1 00:28:30.976 18:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:28:31.234 [ 00:28:31.234 { 00:28:31.234 "name": "Nvme0n1", 00:28:31.234 "aliases": [ 00:28:31.234 "636d3163-6e0e-4ed4-87a9-dc5160b6109a" 00:28:31.234 ], 00:28:31.234 "product_name": "NVMe disk", 00:28:31.234 "block_size": 4096, 00:28:31.234 "num_blocks": 38912, 00:28:31.234 "uuid": "636d3163-6e0e-4ed4-87a9-dc5160b6109a", 00:28:31.234 "numa_id": 0, 00:28:31.234 "assigned_rate_limits": { 00:28:31.234 "rw_ios_per_sec": 0, 00:28:31.234 "rw_mbytes_per_sec": 0, 00:28:31.234 "r_mbytes_per_sec": 0, 00:28:31.234 "w_mbytes_per_sec": 0 00:28:31.234 }, 00:28:31.234 "claimed": false, 00:28:31.234 "zoned": false, 00:28:31.234 "supported_io_types": { 00:28:31.234 "read": true, 00:28:31.234 "write": true, 00:28:31.234 "unmap": true, 00:28:31.234 "flush": true, 00:28:31.234 "reset": true, 00:28:31.234 "nvme_admin": true, 00:28:31.234 "nvme_io": true, 00:28:31.234 "nvme_io_md": false, 00:28:31.234 "write_zeroes": true, 00:28:31.234 "zcopy": false, 00:28:31.234 "get_zone_info": false, 00:28:31.234 "zone_management": false, 00:28:31.234 "zone_append": false, 00:28:31.234 "compare": true, 00:28:31.234 "compare_and_write": true, 00:28:31.234 "abort": true, 00:28:31.234 "seek_hole": false, 00:28:31.234 "seek_data": false, 00:28:31.234 "copy": true, 
00:28:31.234 "nvme_iov_md": false 00:28:31.234 }, 00:28:31.234 "memory_domains": [ 00:28:31.234 { 00:28:31.234 "dma_device_id": "system", 00:28:31.234 "dma_device_type": 1 00:28:31.234 } 00:28:31.234 ], 00:28:31.234 "driver_specific": { 00:28:31.234 "nvme": [ 00:28:31.234 { 00:28:31.234 "trid": { 00:28:31.234 "trtype": "TCP", 00:28:31.234 "adrfam": "IPv4", 00:28:31.234 "traddr": "10.0.0.2", 00:28:31.234 "trsvcid": "4420", 00:28:31.234 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:31.234 }, 00:28:31.234 "ctrlr_data": { 00:28:31.234 "cntlid": 1, 00:28:31.234 "vendor_id": "0x8086", 00:28:31.234 "model_number": "SPDK bdev Controller", 00:28:31.234 "serial_number": "SPDK0", 00:28:31.234 "firmware_revision": "25.01", 00:28:31.234 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:31.234 "oacs": { 00:28:31.234 "security": 0, 00:28:31.234 "format": 0, 00:28:31.234 "firmware": 0, 00:28:31.234 "ns_manage": 0 00:28:31.234 }, 00:28:31.234 "multi_ctrlr": true, 00:28:31.234 "ana_reporting": false 00:28:31.234 }, 00:28:31.234 "vs": { 00:28:31.234 "nvme_version": "1.3" 00:28:31.234 }, 00:28:31.234 "ns_data": { 00:28:31.234 "id": 1, 00:28:31.234 "can_share": true 00:28:31.234 } 00:28:31.234 } 00:28:31.234 ], 00:28:31.234 "mp_policy": "active_passive" 00:28:31.234 } 00:28:31.234 } 00:28:31.234 ] 00:28:31.234 18:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1383897 00:28:31.234 18:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:28:31.234 18:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:31.492 Running I/O for 10 seconds... 
00:28:32.426 Latency(us) 00:28:32.426 [2024-10-08T16:35:50.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:32.426 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:32.426 Nvme0n1 : 1.00 14478.00 56.55 0.00 0.00 0.00 0.00 0.00 00:28:32.426 [2024-10-08T16:35:50.713Z] =================================================================================================================== 00:28:32.426 [2024-10-08T16:35:50.713Z] Total : 14478.00 56.55 0.00 0.00 0.00 0.00 0.00 00:28:32.426 00:28:33.360 18:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a0163e41-3d99-496d-afb5-f1570d62a744 00:28:33.360 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:33.360 Nvme0n1 : 2.00 14541.50 56.80 0.00 0.00 0.00 0.00 0.00 00:28:33.360 [2024-10-08T16:35:51.647Z] =================================================================================================================== 00:28:33.360 [2024-10-08T16:35:51.647Z] Total : 14541.50 56.80 0.00 0.00 0.00 0.00 0.00 00:28:33.360 00:28:33.618 true 00:28:33.618 18:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0163e41-3d99-496d-afb5-f1570d62a744 00:28:33.618 18:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:28:33.876 18:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:28:33.876 18:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:28:33.876 18:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1383897 00:28:34.441 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:34.441 Nvme0n1 : 3.00 14732.00 57.55 0.00 0.00 0.00 0.00 0.00 00:28:34.441 [2024-10-08T16:35:52.728Z] =================================================================================================================== 00:28:34.441 [2024-10-08T16:35:52.728Z] Total : 14732.00 57.55 0.00 0.00 0.00 0.00 0.00 00:28:34.441 00:28:35.372 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:35.372 Nvme0n1 : 4.00 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:28:35.372 [2024-10-08T16:35:53.659Z] =================================================================================================================== 00:28:35.372 [2024-10-08T16:35:53.659Z] Total : 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:28:35.372 00:28:36.746 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:36.746 Nvme0n1 : 5.00 14967.40 58.47 0.00 0.00 0.00 0.00 0.00 00:28:36.746 [2024-10-08T16:35:55.033Z] =================================================================================================================== 00:28:36.746 [2024-10-08T16:35:55.033Z] Total : 14967.40 58.47 0.00 0.00 0.00 0.00 0.00 00:28:36.746 00:28:37.679 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:37.679 Nvme0n1 : 6.00 14991.67 58.56 0.00 0.00 0.00 0.00 0.00 00:28:37.679 [2024-10-08T16:35:55.966Z] 
=================================================================================================================== 00:28:37.679 [2024-10-08T16:35:55.966Z] Total : 14991.67 58.56 0.00 0.00 0.00 0.00 0.00 00:28:37.679 00:28:38.614 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:38.614 Nvme0n1 : 7.00 15045.29 58.77 0.00 0.00 0.00 0.00 0.00 00:28:38.614 [2024-10-08T16:35:56.901Z] =================================================================================================================== 00:28:38.614 [2024-10-08T16:35:56.901Z] Total : 15045.29 58.77 0.00 0.00 0.00 0.00 0.00 00:28:38.614 00:28:39.547 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:39.547 Nvme0n1 : 8.00 15101.38 58.99 0.00 0.00 0.00 0.00 0.00 00:28:39.547 [2024-10-08T16:35:57.834Z] =================================================================================================================== 00:28:39.547 [2024-10-08T16:35:57.834Z] Total : 15101.38 58.99 0.00 0.00 0.00 0.00 0.00 00:28:39.547 00:28:40.479 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:40.479 Nvme0n1 : 9.00 15145.00 59.16 0.00 0.00 0.00 0.00 0.00 00:28:40.479 [2024-10-08T16:35:58.766Z] =================================================================================================================== 00:28:40.479 [2024-10-08T16:35:58.766Z] Total : 15145.00 59.16 0.00 0.00 0.00 0.00 0.00 00:28:40.479 00:28:41.412 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:41.412 Nvme0n1 : 10.00 15179.90 59.30 0.00 0.00 0.00 0.00 0.00 00:28:41.412 [2024-10-08T16:35:59.699Z] =================================================================================================================== 00:28:41.412 [2024-10-08T16:35:59.699Z] Total : 15179.90 59.30 0.00 0.00 0.00 0.00 0.00 00:28:41.412 00:28:41.412 00:28:41.412 Latency(us) 00:28:41.412 [2024-10-08T16:35:59.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:41.412 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:41.412 Nvme0n1 : 10.01 15184.90 59.32 0.00 0.00 8424.99 4490.43 18738.44 00:28:41.412 [2024-10-08T16:35:59.699Z] =================================================================================================================== 00:28:41.412 [2024-10-08T16:35:59.699Z] Total : 15184.90 59.32 0.00 0.00 8424.99 4490.43 18738.44 00:28:41.412 { 00:28:41.412 "results": [ 00:28:41.412 { 00:28:41.412 "job": "Nvme0n1", 00:28:41.412 "core_mask": "0x2", 00:28:41.412 "workload": "randwrite", 00:28:41.412 "status": "finished", 00:28:41.412 "queue_depth": 128, 00:28:41.412 "io_size": 4096, 00:28:41.412 "runtime": 10.005136, 00:28:41.412 "iops": 15184.901034828512, 00:28:41.412 "mibps": 59.316019667298875, 00:28:41.412 "io_failed": 0, 00:28:41.412 "io_timeout": 0, 00:28:41.412 "avg_latency_us": 8424.98746020567, 00:28:41.412 "min_latency_us": 4490.42962962963, 00:28:41.412 "max_latency_us": 18738.44148148148 00:28:41.412 } 00:28:41.412 ], 00:28:41.412 "core_count": 1 00:28:41.412 } 00:28:41.412 18:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1383762 00:28:41.412 18:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1383762 ']' 00:28:41.413 18:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1383762 
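Besides the per-second table, bdevperf emits the machine-readable results block above (iops, mibps, avg/min/max latency in microseconds, core_count). If that JSON were captured to a file, say results.json (a hypothetical name, not something this harness does), the headline numbers could be pulled out with jq:

jq -r '.results[] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us"' results.json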
00:28:41.413 18:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:28:41.413 18:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:41.413 18:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1383762 00:28:41.413 18:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:41.413 18:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:41.413 18:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1383762' 00:28:41.413 killing process with pid 1383762 00:28:41.413 18:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1383762 00:28:41.413 Received shutdown signal, test time was about 10.000000 seconds 00:28:41.413 00:28:41.413 Latency(us) 00:28:41.413 [2024-10-08T16:35:59.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:41.413 [2024-10-08T16:35:59.700Z] =================================================================================================================== 00:28:41.413 [2024-10-08T16:35:59.700Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:41.413 18:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1383762 00:28:41.670 18:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:42.235 18:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:42.492 18:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0163e41-3d99-496d-afb5-f1570d62a744 00:28:42.492 18:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:28:42.750 18:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:28:42.750 18:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:28:42.750 18:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:43.008 [2024-10-08 18:36:01.100902] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:28:43.008 18:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0163e41-3d99-496d-afb5-f1570d62a744 
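The growth and teardown steps traced above are the core of the test: while bdevperf is still writing, the backing file is doubled to 400 MiB, the AIO bdev is rescanned, and bdev_lvol_grow_lvstore expands the lvstore from 49 to 99 data clusters; afterwards the subsystem is removed and the AIO bdev is deleted, so a further bdev_lvol_get_lvstores must fail with "No such device". Condensed, with $LVS again standing for a0163e41-...:

truncate -s 400M test/nvmf/target/aio_bdev             # grow the backing file 200M -> 400M
scripts/rpc.py bdev_aio_rescan aio_bdev                 # pick up the new size (51200 -> 102400 blocks)
scripts/rpc.py bdev_lvol_grow_lvstore -u "$LVS"         # issued while the randwrite job is running
scripts/rpc.py bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'   # 49 -> 99
scripts/rpc.py bdev_aio_delete aio_bdev                 # closes the lvstore with it
scripts/rpc.py bdev_lvol_get_lvstores -u "$LVS"         # now expected to fail (code -19)

The trace that follows re-creates the AIO bdev on the same file, waits for the lvol to reappear, and checks that free_clusters (61) and total_data_clusters (99) survived the reload.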
00:28:43.008 18:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:28:43.008 18:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0163e41-3d99-496d-afb5-f1570d62a744 00:28:43.008 18:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:43.008 18:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:43.008 18:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:43.008 18:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:43.008 18:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:43.008 18:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:43.008 18:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:43.008 18:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:28:43.008 18:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0163e41-3d99-496d-afb5-f1570d62a744 00:28:43.265 request: 00:28:43.265 { 00:28:43.265 "uuid": "a0163e41-3d99-496d-afb5-f1570d62a744", 00:28:43.265 "method": "bdev_lvol_get_lvstores", 00:28:43.265 "req_id": 1 00:28:43.265 } 00:28:43.265 Got JSON-RPC error response 00:28:43.265 response: 00:28:43.265 { 00:28:43.265 "code": -19, 00:28:43.265 "message": "No such device" 00:28:43.265 } 00:28:43.265 18:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:28:43.265 18:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:43.265 18:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:43.265 18:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:43.265 18:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:43.523 aio_bdev 00:28:43.523 18:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
636d3163-6e0e-4ed4-87a9-dc5160b6109a 00:28:43.523 18:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=636d3163-6e0e-4ed4-87a9-dc5160b6109a 00:28:43.523 18:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:28:43.523 18:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:28:43.523 18:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:28:43.523 18:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:28:43.523 18:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:43.780 18:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 636d3163-6e0e-4ed4-87a9-dc5160b6109a -t 2000 00:28:44.037 [ 00:28:44.037 { 00:28:44.037 "name": "636d3163-6e0e-4ed4-87a9-dc5160b6109a", 00:28:44.037 "aliases": [ 00:28:44.037 "lvs/lvol" 00:28:44.037 ], 00:28:44.037 "product_name": "Logical Volume", 00:28:44.037 "block_size": 4096, 00:28:44.037 "num_blocks": 38912, 00:28:44.037 "uuid": "636d3163-6e0e-4ed4-87a9-dc5160b6109a", 00:28:44.037 "assigned_rate_limits": { 00:28:44.037 "rw_ios_per_sec": 0, 00:28:44.037 "rw_mbytes_per_sec": 0, 00:28:44.037 "r_mbytes_per_sec": 0, 00:28:44.037 "w_mbytes_per_sec": 0 00:28:44.037 }, 00:28:44.037 "claimed": false, 00:28:44.037 "zoned": false, 00:28:44.037 "supported_io_types": { 00:28:44.037 "read": true, 00:28:44.037 "write": true, 00:28:44.037 "unmap": true, 00:28:44.037 "flush": false, 00:28:44.037 "reset": true, 00:28:44.037 "nvme_admin": false, 00:28:44.037 "nvme_io": false, 00:28:44.037 "nvme_io_md": false, 00:28:44.037 "write_zeroes": true, 00:28:44.037 "zcopy": false, 00:28:44.037 "get_zone_info": false, 00:28:44.037 "zone_management": false, 00:28:44.037 "zone_append": false, 00:28:44.037 "compare": false, 00:28:44.037 "compare_and_write": false, 00:28:44.037 "abort": false, 00:28:44.037 "seek_hole": true, 00:28:44.037 "seek_data": true, 00:28:44.037 "copy": false, 00:28:44.037 "nvme_iov_md": false 00:28:44.037 }, 00:28:44.037 "driver_specific": { 00:28:44.037 "lvol": { 00:28:44.037 "lvol_store_uuid": "a0163e41-3d99-496d-afb5-f1570d62a744", 00:28:44.037 "base_bdev": "aio_bdev", 00:28:44.037 "thin_provision": false, 00:28:44.037 "num_allocated_clusters": 38, 00:28:44.037 "snapshot": false, 00:28:44.037 "clone": false, 00:28:44.037 "esnap_clone": false 00:28:44.037 } 00:28:44.037 } 00:28:44.037 } 00:28:44.037 ] 00:28:44.037 18:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:28:44.037 18:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0163e41-3d99-496d-afb5-f1570d62a744 00:28:44.037 18:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:28:44.294 18:36:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:28:44.294 18:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a0163e41-3d99-496d-afb5-f1570d62a744 00:28:44.294 18:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:28:44.551 18:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:28:44.551 18:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 636d3163-6e0e-4ed4-87a9-dc5160b6109a 00:28:44.808 18:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a0163e41-3d99-496d-afb5-f1570d62a744 00:28:45.065 18:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:45.322 18:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:45.580 00:28:45.580 real 0m17.935s 00:28:45.580 user 0m17.571s 00:28:45.580 sys 0m1.875s 00:28:45.580 18:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:45.580 18:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:28:45.580 ************************************ 00:28:45.580 END TEST lvs_grow_clean 00:28:45.580 ************************************ 00:28:45.580 18:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:28:45.580 18:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:45.580 18:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:45.580 18:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:45.580 ************************************ 00:28:45.580 START TEST lvs_grow_dirty 00:28:45.580 ************************************ 00:28:45.580 18:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:28:45.580 18:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:28:45.580 18:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:28:45.580 18:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:28:45.580 18:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:28:45.580 18:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:28:45.580 18:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:28:45.580 18:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:45.580 18:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:45.580 18:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:45.838 18:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:28:45.838 18:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:28:46.096 18:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a5c3763b-0089-40ce-b019-8d8fed5099d8 00:28:46.096 18:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5c3763b-0089-40ce-b019-8d8fed5099d8 00:28:46.096 18:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:28:46.353 18:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:28:46.353 18:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:28:46.353 18:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a5c3763b-0089-40ce-b019-8d8fed5099d8 lvol 150 00:28:46.611 18:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=d34e5a95-6ae7-49bd-8944-34b5a666cc25 00:28:46.611 18:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:46.611 18:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:28:46.868 [2024-10-08 18:36:05.116849] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:28:46.868 [2024-10-08 18:36:05.116949] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:28:46.868 true 00:28:46.868 18:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5c3763b-0089-40ce-b019-8d8fed5099d8 00:28:46.868 18:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:28:47.126 18:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:28:47.126 18:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:47.691 18:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d34e5a95-6ae7-49bd-8944-34b5a666cc25 00:28:47.691 18:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:47.980 [2024-10-08 18:36:06.217158] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:47.980 18:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:48.265 18:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1386040 00:28:48.265 18:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:28:48.265 18:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:48.265 18:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1386040 /var/tmp/bdevperf.sock 00:28:48.265 18:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1386040 ']' 00:28:48.265 18:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:48.265 18:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:48.265 18:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:48.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
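lvs_grow_dirty repeats the same provisioning on a fresh lvstore (a5c3763b-...) and lvol (d34e5a95-...), and here a second bdevperf instance (pid 1386040) is being brought up. The client side works the same way in both variants; a condensed sketch, where the flag meanings are my reading of the options shown in the trace rather than quotes from bdevperf's help text:

build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 \
        -w randwrite -t 10 -S 1 -z &                    # -z: start idle and wait for RPC
# once /var/tmp/bdevperf.sock answers, attach the exported namespace and run the job
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests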
00:28:48.265 18:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:48.265 18:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:48.523 [2024-10-08 18:36:06.553496] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:28:48.523 [2024-10-08 18:36:06.553596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1386040 ] 00:28:48.523 [2024-10-08 18:36:06.614445] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.523 [2024-10-08 18:36:06.728704] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.780 18:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:48.780 18:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:28:48.780 18:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:28:49.037 Nvme0n1 00:28:49.037 18:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:28:49.295 [ 00:28:49.295 { 00:28:49.295 "name": "Nvme0n1", 00:28:49.295 "aliases": [ 00:28:49.295 "d34e5a95-6ae7-49bd-8944-34b5a666cc25" 00:28:49.295 ], 00:28:49.295 "product_name": "NVMe disk", 00:28:49.295 "block_size": 4096, 00:28:49.295 "num_blocks": 38912, 00:28:49.295 "uuid": "d34e5a95-6ae7-49bd-8944-34b5a666cc25", 00:28:49.295 "numa_id": 0, 00:28:49.295 "assigned_rate_limits": { 00:28:49.295 "rw_ios_per_sec": 0, 00:28:49.295 "rw_mbytes_per_sec": 0, 00:28:49.295 "r_mbytes_per_sec": 0, 00:28:49.295 "w_mbytes_per_sec": 0 00:28:49.295 }, 00:28:49.295 "claimed": false, 00:28:49.295 "zoned": false, 00:28:49.295 "supported_io_types": { 00:28:49.295 "read": true, 00:28:49.295 "write": true, 00:28:49.295 "unmap": true, 00:28:49.295 "flush": true, 00:28:49.295 "reset": true, 00:28:49.295 "nvme_admin": true, 00:28:49.295 "nvme_io": true, 00:28:49.295 "nvme_io_md": false, 00:28:49.295 "write_zeroes": true, 00:28:49.295 "zcopy": false, 00:28:49.295 "get_zone_info": false, 00:28:49.295 "zone_management": false, 00:28:49.295 "zone_append": false, 00:28:49.295 "compare": true, 00:28:49.295 "compare_and_write": true, 00:28:49.295 "abort": true, 00:28:49.295 "seek_hole": false, 00:28:49.295 "seek_data": false, 00:28:49.295 "copy": true, 00:28:49.295 "nvme_iov_md": false 00:28:49.295 }, 00:28:49.295 "memory_domains": [ 00:28:49.295 { 00:28:49.295 "dma_device_id": "system", 00:28:49.295 "dma_device_type": 1 00:28:49.295 } 00:28:49.295 ], 00:28:49.295 "driver_specific": { 00:28:49.295 "nvme": [ 00:28:49.295 { 00:28:49.295 "trid": { 00:28:49.295 "trtype": "TCP", 00:28:49.295 "adrfam": "IPv4", 00:28:49.295 "traddr": "10.0.0.2", 00:28:49.295 "trsvcid": "4420", 00:28:49.295 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:49.295 }, 00:28:49.295 "ctrlr_data": 
{ 00:28:49.295 "cntlid": 1, 00:28:49.295 "vendor_id": "0x8086", 00:28:49.295 "model_number": "SPDK bdev Controller", 00:28:49.295 "serial_number": "SPDK0", 00:28:49.295 "firmware_revision": "25.01", 00:28:49.295 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:49.295 "oacs": { 00:28:49.295 "security": 0, 00:28:49.295 "format": 0, 00:28:49.295 "firmware": 0, 00:28:49.295 "ns_manage": 0 00:28:49.295 }, 00:28:49.295 "multi_ctrlr": true, 00:28:49.295 "ana_reporting": false 00:28:49.295 }, 00:28:49.295 "vs": { 00:28:49.295 "nvme_version": "1.3" 00:28:49.295 }, 00:28:49.295 "ns_data": { 00:28:49.295 "id": 1, 00:28:49.295 "can_share": true 00:28:49.295 } 00:28:49.295 } 00:28:49.295 ], 00:28:49.295 "mp_policy": "active_passive" 00:28:49.295 } 00:28:49.295 } 00:28:49.295 ] 00:28:49.295 18:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1386330 00:28:49.295 18:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:28:49.295 18:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:49.553 Running I/O for 10 seconds... 00:28:50.488 Latency(us) 00:28:50.488 [2024-10-08T16:36:08.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:50.488 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:50.488 Nvme0n1 : 1.00 14351.00 56.06 0.00 0.00 0.00 0.00 0.00 00:28:50.488 [2024-10-08T16:36:08.775Z] =================================================================================================================== 00:28:50.488 [2024-10-08T16:36:08.775Z] Total : 14351.00 56.06 0.00 0.00 0.00 0.00 0.00 00:28:50.488 00:28:51.421 18:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a5c3763b-0089-40ce-b019-8d8fed5099d8 00:28:51.421 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:51.421 Nvme0n1 : 2.00 14541.50 56.80 0.00 0.00 0.00 0.00 0.00 00:28:51.421 [2024-10-08T16:36:09.708Z] =================================================================================================================== 00:28:51.421 [2024-10-08T16:36:09.708Z] Total : 14541.50 56.80 0.00 0.00 0.00 0.00 0.00 00:28:51.421 00:28:51.679 true 00:28:51.679 18:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5c3763b-0089-40ce-b019-8d8fed5099d8 00:28:51.679 18:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:28:51.937 18:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:28:51.937 18:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:28:51.937 18:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1386330 00:28:52.502 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:52.502 Nvme0n1 : 
3.00 14605.00 57.05 0.00 0.00 0.00 0.00 0.00 00:28:52.502 [2024-10-08T16:36:10.789Z] =================================================================================================================== 00:28:52.502 [2024-10-08T16:36:10.789Z] Total : 14605.00 57.05 0.00 0.00 0.00 0.00 0.00 00:28:52.502 00:28:53.435 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:53.435 Nvme0n1 : 4.00 14700.25 57.42 0.00 0.00 0.00 0.00 0.00 00:28:53.435 [2024-10-08T16:36:11.723Z] =================================================================================================================== 00:28:53.436 [2024-10-08T16:36:11.723Z] Total : 14700.25 57.42 0.00 0.00 0.00 0.00 0.00 00:28:53.436 00:28:54.808 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:54.808 Nvme0n1 : 5.00 14782.80 57.75 0.00 0.00 0.00 0.00 0.00 00:28:54.808 [2024-10-08T16:36:13.095Z] =================================================================================================================== 00:28:54.808 [2024-10-08T16:36:13.095Z] Total : 14782.80 57.75 0.00 0.00 0.00 0.00 0.00 00:28:54.808 00:28:55.742 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:55.742 Nvme0n1 : 6.00 14837.83 57.96 0.00 0.00 0.00 0.00 0.00 00:28:55.742 [2024-10-08T16:36:14.029Z] =================================================================================================================== 00:28:55.742 [2024-10-08T16:36:14.029Z] Total : 14837.83 57.96 0.00 0.00 0.00 0.00 0.00 00:28:55.742 00:28:56.676 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:56.676 Nvme0n1 : 7.00 14895.29 58.18 0.00 0.00 0.00 0.00 0.00 00:28:56.676 [2024-10-08T16:36:14.963Z] =================================================================================================================== 00:28:56.676 [2024-10-08T16:36:14.963Z] Total : 14895.29 58.18 0.00 0.00 0.00 0.00 0.00 00:28:56.676 00:28:57.609 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:57.609 Nvme0n1 : 8.00 14954.25 58.42 0.00 0.00 0.00 0.00 0.00 00:28:57.609 [2024-10-08T16:36:15.896Z] =================================================================================================================== 00:28:57.609 [2024-10-08T16:36:15.896Z] Total : 14954.25 58.42 0.00 0.00 0.00 0.00 0.00 00:28:57.609 00:28:58.542 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:58.542 Nvme0n1 : 9.00 15000.11 58.59 0.00 0.00 0.00 0.00 0.00 00:28:58.542 [2024-10-08T16:36:16.829Z] =================================================================================================================== 00:28:58.542 [2024-10-08T16:36:16.829Z] Total : 15000.11 58.59 0.00 0.00 0.00 0.00 0.00 00:28:58.542 00:28:59.475 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:59.475 Nvme0n1 : 10.00 15024.10 58.69 0.00 0.00 0.00 0.00 0.00 00:28:59.475 [2024-10-08T16:36:17.762Z] =================================================================================================================== 00:28:59.475 [2024-10-08T16:36:17.762Z] Total : 15024.10 58.69 0.00 0.00 0.00 0.00 0.00 00:28:59.475 00:28:59.475 00:28:59.475 Latency(us) 00:28:59.475 [2024-10-08T16:36:17.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.475 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:59.475 Nvme0n1 : 10.01 15023.97 58.69 0.00 0.00 8515.01 7573.05 18932.62 00:28:59.475 
[2024-10-08T16:36:17.762Z] =================================================================================================================== 00:28:59.475 [2024-10-08T16:36:17.762Z] Total : 15023.97 58.69 0.00 0.00 8515.01 7573.05 18932.62 00:28:59.475 { 00:28:59.475 "results": [ 00:28:59.475 { 00:28:59.475 "job": "Nvme0n1", 00:28:59.475 "core_mask": "0x2", 00:28:59.475 "workload": "randwrite", 00:28:59.475 "status": "finished", 00:28:59.475 "queue_depth": 128, 00:28:59.475 "io_size": 4096, 00:28:59.475 "runtime": 10.008608, 00:28:59.475 "iops": 15023.96736888886, 00:28:59.475 "mibps": 58.68737253472211, 00:28:59.475 "io_failed": 0, 00:28:59.475 "io_timeout": 0, 00:28:59.475 "avg_latency_us": 8515.00704691151, 00:28:59.475 "min_latency_us": 7573.0488888888885, 00:28:59.475 "max_latency_us": 18932.62222222222 00:28:59.475 } 00:28:59.475 ], 00:28:59.475 "core_count": 1 00:28:59.475 } 00:28:59.475 18:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1386040 00:28:59.475 18:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1386040 ']' 00:28:59.475 18:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1386040 00:28:59.475 18:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:28:59.475 18:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:59.475 18:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1386040 00:28:59.732 18:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:59.732 18:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:59.732 18:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1386040' 00:28:59.732 killing process with pid 1386040 00:28:59.732 18:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1386040 00:28:59.732 Received shutdown signal, test time was about 10.000000 seconds 00:28:59.732 00:28:59.732 Latency(us) 00:28:59.732 [2024-10-08T16:36:18.019Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.732 [2024-10-08T16:36:18.019Z] =================================================================================================================== 00:28:59.732 [2024-10-08T16:36:18.019Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:59.732 18:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1386040 00:28:59.989 18:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:00.254 18:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:29:00.512 18:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5c3763b-0089-40ce-b019-8d8fed5099d8 00:29:00.512 18:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:00.771 18:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:00.771 18:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:29:00.771 18:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1383323 00:29:00.771 18:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1383323 00:29:00.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1383323 Killed "${NVMF_APP[@]}" "$@" 00:29:00.771 18:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:29:00.771 18:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:29:00.771 18:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:00.771 18:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:00.771 18:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:00.771 18:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=1387997 00:29:00.771 18:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:00.771 18:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 1387997 00:29:00.771 18:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1387997 ']' 00:29:00.771 18:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:00.771 18:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:00.771 18:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:00.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:00.771 18:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:00.771 18:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:00.771 [2024-10-08 18:36:19.009882] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:00.771 [2024-10-08 18:36:19.010889] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:29:00.771 [2024-10-08 18:36:19.010941] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:01.029 [2024-10-08 18:36:19.077126] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.029 [2024-10-08 18:36:19.186256] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:01.029 [2024-10-08 18:36:19.186345] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:01.029 [2024-10-08 18:36:19.186360] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:01.029 [2024-10-08 18:36:19.186387] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:01.029 [2024-10-08 18:36:19.186397] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:01.029 [2024-10-08 18:36:19.186939] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.029 [2024-10-08 18:36:19.271651] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:01.029 [2024-10-08 18:36:19.271961] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:01.029 18:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:01.029 18:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:29:01.029 18:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:01.029 18:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:01.029 18:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:01.029 18:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:01.286 18:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:01.543 [2024-10-08 18:36:19.581785] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:01.543 [2024-10-08 18:36:19.581935] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:01.543 [2024-10-08 18:36:19.581985] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:01.543 18:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:29:01.543 18:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev d34e5a95-6ae7-49bd-8944-34b5a666cc25 00:29:01.543 18:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=d34e5a95-6ae7-49bd-8944-34b5a666cc25 00:29:01.543 18:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:01.543 18:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:29:01.543 18:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:01.543 18:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:01.543 18:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:01.800 18:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d34e5a95-6ae7-49bd-8944-34b5a666cc25 -t 2000 00:29:02.058 [ 00:29:02.058 { 00:29:02.058 "name": "d34e5a95-6ae7-49bd-8944-34b5a666cc25", 00:29:02.058 "aliases": [ 00:29:02.058 "lvs/lvol" 00:29:02.058 ], 00:29:02.058 "product_name": "Logical Volume", 00:29:02.058 "block_size": 4096, 00:29:02.058 "num_blocks": 38912, 00:29:02.058 "uuid": "d34e5a95-6ae7-49bd-8944-34b5a666cc25", 00:29:02.058 "assigned_rate_limits": { 00:29:02.058 "rw_ios_per_sec": 0, 00:29:02.058 "rw_mbytes_per_sec": 0, 00:29:02.058 
"r_mbytes_per_sec": 0, 00:29:02.058 "w_mbytes_per_sec": 0 00:29:02.058 }, 00:29:02.058 "claimed": false, 00:29:02.058 "zoned": false, 00:29:02.058 "supported_io_types": { 00:29:02.058 "read": true, 00:29:02.058 "write": true, 00:29:02.058 "unmap": true, 00:29:02.058 "flush": false, 00:29:02.058 "reset": true, 00:29:02.058 "nvme_admin": false, 00:29:02.058 "nvme_io": false, 00:29:02.058 "nvme_io_md": false, 00:29:02.058 "write_zeroes": true, 00:29:02.058 "zcopy": false, 00:29:02.058 "get_zone_info": false, 00:29:02.058 "zone_management": false, 00:29:02.058 "zone_append": false, 00:29:02.058 "compare": false, 00:29:02.058 "compare_and_write": false, 00:29:02.058 "abort": false, 00:29:02.058 "seek_hole": true, 00:29:02.058 "seek_data": true, 00:29:02.058 "copy": false, 00:29:02.058 "nvme_iov_md": false 00:29:02.058 }, 00:29:02.058 "driver_specific": { 00:29:02.058 "lvol": { 00:29:02.058 "lvol_store_uuid": "a5c3763b-0089-40ce-b019-8d8fed5099d8", 00:29:02.058 "base_bdev": "aio_bdev", 00:29:02.058 "thin_provision": false, 00:29:02.058 "num_allocated_clusters": 38, 00:29:02.058 "snapshot": false, 00:29:02.058 "clone": false, 00:29:02.058 "esnap_clone": false 00:29:02.058 } 00:29:02.058 } 00:29:02.058 } 00:29:02.058 ] 00:29:02.058 18:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:29:02.058 18:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5c3763b-0089-40ce-b019-8d8fed5099d8 00:29:02.058 18:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:29:02.315 18:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:29:02.315 18:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5c3763b-0089-40ce-b019-8d8fed5099d8 00:29:02.315 18:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:29:02.573 18:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:29:02.573 18:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:02.831 [2024-10-08 18:36:20.983478] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:02.831 18:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5c3763b-0089-40ce-b019-8d8fed5099d8 00:29:02.831 18:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:29:02.831 18:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5c3763b-0089-40ce-b019-8d8fed5099d8 00:29:02.831 18:36:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:02.831 18:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:02.831 18:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:02.831 18:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:02.831 18:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:02.831 18:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:02.831 18:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:02.831 18:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:02.831 18:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5c3763b-0089-40ce-b019-8d8fed5099d8 00:29:03.088 request: 00:29:03.088 { 00:29:03.088 "uuid": "a5c3763b-0089-40ce-b019-8d8fed5099d8", 00:29:03.088 "method": "bdev_lvol_get_lvstores", 00:29:03.088 "req_id": 1 00:29:03.088 } 00:29:03.088 Got JSON-RPC error response 00:29:03.088 response: 00:29:03.088 { 00:29:03.088 "code": -19, 00:29:03.088 "message": "No such device" 00:29:03.088 } 00:29:03.088 18:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:29:03.088 18:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:03.088 18:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:03.088 18:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:03.088 18:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:03.346 aio_bdev 00:29:03.346 18:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d34e5a95-6ae7-49bd-8944-34b5a666cc25 00:29:03.346 18:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=d34e5a95-6ae7-49bd-8944-34b5a666cc25 00:29:03.346 18:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:03.346 18:36:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:29:03.346 18:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:03.346 18:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:03.346 18:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:03.603 18:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d34e5a95-6ae7-49bd-8944-34b5a666cc25 -t 2000 00:29:03.860 [ 00:29:03.860 { 00:29:03.860 "name": "d34e5a95-6ae7-49bd-8944-34b5a666cc25", 00:29:03.860 "aliases": [ 00:29:03.860 "lvs/lvol" 00:29:03.860 ], 00:29:03.860 "product_name": "Logical Volume", 00:29:03.860 "block_size": 4096, 00:29:03.860 "num_blocks": 38912, 00:29:03.860 "uuid": "d34e5a95-6ae7-49bd-8944-34b5a666cc25", 00:29:03.860 "assigned_rate_limits": { 00:29:03.860 "rw_ios_per_sec": 0, 00:29:03.860 "rw_mbytes_per_sec": 0, 00:29:03.860 "r_mbytes_per_sec": 0, 00:29:03.860 "w_mbytes_per_sec": 0 00:29:03.860 }, 00:29:03.860 "claimed": false, 00:29:03.860 "zoned": false, 00:29:03.860 "supported_io_types": { 00:29:03.860 "read": true, 00:29:03.860 "write": true, 00:29:03.860 "unmap": true, 00:29:03.860 "flush": false, 00:29:03.860 "reset": true, 00:29:03.860 "nvme_admin": false, 00:29:03.860 "nvme_io": false, 00:29:03.860 "nvme_io_md": false, 00:29:03.860 "write_zeroes": true, 00:29:03.860 "zcopy": false, 00:29:03.860 "get_zone_info": false, 00:29:03.860 "zone_management": false, 00:29:03.860 "zone_append": false, 00:29:03.860 "compare": false, 00:29:03.860 "compare_and_write": false, 00:29:03.860 "abort": false, 00:29:03.860 "seek_hole": true, 00:29:03.860 "seek_data": true, 00:29:03.860 "copy": false, 00:29:03.860 "nvme_iov_md": false 00:29:03.860 }, 00:29:03.860 "driver_specific": { 00:29:03.860 "lvol": { 00:29:03.860 "lvol_store_uuid": "a5c3763b-0089-40ce-b019-8d8fed5099d8", 00:29:03.860 "base_bdev": "aio_bdev", 00:29:03.860 "thin_provision": false, 00:29:03.860 "num_allocated_clusters": 38, 00:29:03.860 "snapshot": false, 00:29:03.860 "clone": false, 00:29:03.860 "esnap_clone": false 00:29:03.860 } 00:29:03.860 } 00:29:03.860 } 00:29:03.860 ] 00:29:03.860 18:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:29:03.860 18:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5c3763b-0089-40ce-b019-8d8fed5099d8 00:29:03.860 18:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:04.117 18:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:04.117 18:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5c3763b-0089-40ce-b019-8d8fed5099d8 00:29:04.117 18:36:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:04.681 18:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:04.681 18:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d34e5a95-6ae7-49bd-8944-34b5a666cc25 00:29:04.681 18:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a5c3763b-0089-40ce-b019-8d8fed5099d8 00:29:05.247 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:05.247 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:05.504 00:29:05.504 real 0m19.867s 00:29:05.504 user 0m37.004s 00:29:05.504 sys 0m4.621s 00:29:05.504 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:05.504 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:05.504 ************************************ 00:29:05.504 END TEST lvs_grow_dirty 00:29:05.504 ************************************ 00:29:05.504 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:29:05.504 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:29:05.505 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:29:05.505 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:29:05.505 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:29:05.505 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:29:05.505 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:29:05.505 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:29:05.505 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:29:05.505 nvmf_trace.0 00:29:05.505 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:29:05.505 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:29:05.505 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:05.505 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
00:29:05.505 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:05.505 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:29:05.505 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:05.505 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:05.505 rmmod nvme_tcp 00:29:05.505 rmmod nvme_fabrics 00:29:05.505 rmmod nvme_keyring 00:29:05.505 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:05.505 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:29:05.505 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:29:05.505 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 1387997 ']' 00:29:05.505 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 1387997 00:29:05.505 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1387997 ']' 00:29:05.505 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1387997 00:29:05.505 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:29:05.505 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:05.505 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1387997 00:29:05.505 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:05.505 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:05.505 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1387997' 00:29:05.505 killing process with pid 1387997 00:29:05.505 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1387997 00:29:05.505 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1387997 00:29:05.762 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:05.762 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:05.762 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:05.762 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:29:05.762 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:29:05.762 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:05.762 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:29:05.762 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:05.762 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:05.762 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.762 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:05.762 18:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:08.299 00:29:08.299 real 0m43.432s 00:29:08.299 user 0m56.432s 00:29:08.299 sys 0m8.560s 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:08.299 ************************************ 00:29:08.299 END TEST nvmf_lvs_grow 00:29:08.299 ************************************ 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:08.299 ************************************ 00:29:08.299 START TEST nvmf_bdev_io_wait 00:29:08.299 ************************************ 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:08.299 * Looking for test storage... 
00:29:08.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:29:08.299 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:08.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.300 --rc genhtml_branch_coverage=1 00:29:08.300 --rc genhtml_function_coverage=1 00:29:08.300 --rc genhtml_legend=1 00:29:08.300 --rc geninfo_all_blocks=1 00:29:08.300 --rc geninfo_unexecuted_blocks=1 00:29:08.300 00:29:08.300 ' 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:08.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.300 --rc genhtml_branch_coverage=1 00:29:08.300 --rc genhtml_function_coverage=1 00:29:08.300 --rc genhtml_legend=1 00:29:08.300 --rc geninfo_all_blocks=1 00:29:08.300 --rc geninfo_unexecuted_blocks=1 00:29:08.300 00:29:08.300 ' 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:08.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.300 --rc genhtml_branch_coverage=1 00:29:08.300 --rc genhtml_function_coverage=1 00:29:08.300 --rc genhtml_legend=1 00:29:08.300 --rc geninfo_all_blocks=1 00:29:08.300 --rc geninfo_unexecuted_blocks=1 00:29:08.300 00:29:08.300 ' 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:08.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:08.300 --rc genhtml_branch_coverage=1 00:29:08.300 --rc genhtml_function_coverage=1 00:29:08.300 --rc genhtml_legend=1 00:29:08.300 --rc geninfo_all_blocks=1 00:29:08.300 --rc 
geninfo_unexecuted_blocks=1 00:29:08.300 00:29:08.300 ' 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:29:08.300 18:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:10.201 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:10.201 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.201 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:10.202 Found net devices under 0000:09:00.0: cvl_0_0 00:29:10.202 
18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.202 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:10.202 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.202 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:10.202 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.202 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:10.202 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:10.202 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.202 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:10.202 Found net devices under 0000:09:00.1: cvl_0_1 00:29:10.202 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.202 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:10.202 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:29:10.202 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:10.202 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:10.202 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:10.202 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:10.202 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:10.202 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:10.202 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:10.202 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:10.202 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:10.202 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:10.202 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:10.202 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:10.202 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:10.202 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:10.202 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:10.202 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:10.202 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:10.202 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:10.202 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:10.202 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:10.202 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:10.202 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:10.461 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:10.461 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:10.461 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:10.461 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:10.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:10.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:29:10.461 00:29:10.461 --- 10.0.0.2 ping statistics --- 00:29:10.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.461 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:29:10.461 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:10.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:10.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:29:10.461 00:29:10.461 --- 10.0.0.1 ping statistics --- 00:29:10.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.461 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:29:10.461 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:10.461 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:29:10.461 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:10.461 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:10.461 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:10.461 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:10.461 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:10.461 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:10.461 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:10.461 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:10.461 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:10.461 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:10.461 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:10.461 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=1390524 00:29:10.461 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 1390524 00:29:10.461 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1390524 ']' 00:29:10.461 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:29:10.461 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:10.461 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:10.461 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:10.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
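As a readability aid, the namespace/addressing topology built by the nvmf_tcp_init trace above boils down to the following shell steps (interface names, addresses and the iptables rule are copied from this log; this is only a recap of what was just traced, not an extra step the job runs):
  ip netns add cvl_0_0_ns_spdk                                        # target side gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # first e810 port becomes the target NIC
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port on the initiator side
  ping -c 1 10.0.0.2                                                  # reachability check in both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1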
00:29:10.461 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:10.461 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:10.461 [2024-10-08 18:36:28.608580] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:10.461 [2024-10-08 18:36:28.609660] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:29:10.461 [2024-10-08 18:36:28.609733] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:10.461 [2024-10-08 18:36:28.672357] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:10.720 [2024-10-08 18:36:28.781865] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:10.720 [2024-10-08 18:36:28.781918] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:10.720 [2024-10-08 18:36:28.781942] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:10.721 [2024-10-08 18:36:28.781953] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:10.721 [2024-10-08 18:36:28.781962] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:10.721 [2024-10-08 18:36:28.783666] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:29:10.721 [2024-10-08 18:36:28.783760] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:29:10.721 [2024-10-08 18:36:28.783818] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:29:10.721 [2024-10-08 18:36:28.783822] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.721 [2024-10-08 18:36:28.784329] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
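The target application that produced the reactor/interrupt-mode notices above runs inside that namespace; stripped of the rpc_cmd/waitforlisten plumbing, the launch traced earlier amounts to the sketch below (binary path as in this workspace; nvmfpid handling simplified):
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
  nvmfpid=$!
  # --wait-for-rpc holds the app before subsystem init; waitforlisten then polls
  # until /var/tmp/spdk.sock exists and accepts RPCs before the test continues.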
00:29:10.721 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:10.721 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:29:10.721 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:10.721 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:10.721 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:10.721 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:10.721 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:29:10.721 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.721 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:10.721 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.721 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:29:10.721 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.721 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:10.721 [2024-10-08 18:36:28.944539] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:10.721 [2024-10-08 18:36:28.944714] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:10.721 [2024-10-08 18:36:28.945650] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:10.721 [2024-10-08 18:36:28.946466] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
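The rpc_cmd calls around this point (the two just traced plus the transport/bdev/subsystem calls that follow) are plain JSON-RPCs; an equivalent sequence using SPDK's scripts/rpc.py directly would look like this sketch (arguments copied from the trace; the deliberately tiny bdev_io pool is what lets the bdev_io_wait test drive the I/O-wait/retry path):
  ./scripts/rpc.py bdev_set_options -p 5 -c 1        # bdev_io pool of 5, per-thread cache of 1
  ./scripts/rpc.py framework_start_init              # finish the init deferred by --wait-for-rpc
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420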
00:29:10.721 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.721 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:10.721 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.721 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:10.721 [2024-10-08 18:36:28.952518] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:10.721 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.721 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:10.721 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.721 18:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:10.721 Malloc0 00:29:10.721 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.721 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:10.721 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.721 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:10.979 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.979 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:10.979 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.979 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:10.979 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.979 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:10.979 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.979 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:10.979 [2024-10-08 18:36:29.024721] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:10.979 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.979 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1390669 00:29:10.979 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1390670 00:29:10.979 18:36:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:29:10.979 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:29:10.979 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:29:10.979 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1390673 00:29:10.979 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:29:10.979 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:29:10.979 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:29:10.979 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:10.979 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:29:10.979 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:10.979 { 00:29:10.979 "params": { 00:29:10.979 "name": "Nvme$subsystem", 00:29:10.979 "trtype": "$TEST_TRANSPORT", 00:29:10.979 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:10.979 "adrfam": "ipv4", 00:29:10.979 "trsvcid": "$NVMF_PORT", 00:29:10.979 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.980 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.980 "hdgst": ${hdgst:-false}, 00:29:10.980 "ddgst": ${ddgst:-false} 00:29:10.980 }, 00:29:10.980 "method": "bdev_nvme_attach_controller" 00:29:10.980 } 00:29:10.980 EOF 00:29:10.980 )") 00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:10.980 { 00:29:10.980 "params": { 00:29:10.980 "name": "Nvme$subsystem", 00:29:10.980 "trtype": "$TEST_TRANSPORT", 00:29:10.980 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:10.980 "adrfam": "ipv4", 00:29:10.980 "trsvcid": "$NVMF_PORT", 00:29:10.980 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.980 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.980 "hdgst": ${hdgst:-false}, 00:29:10.980 "ddgst": ${ddgst:-false} 00:29:10.980 }, 00:29:10.980 "method": "bdev_nvme_attach_controller" 00:29:10.980 } 00:29:10.980 EOF 00:29:10.980 )") 00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1390675 00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
gen_nvmf_target_json 00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:10.980 { 00:29:10.980 "params": { 00:29:10.980 "name": "Nvme$subsystem", 00:29:10.980 "trtype": "$TEST_TRANSPORT", 00:29:10.980 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:10.980 "adrfam": "ipv4", 00:29:10.980 "trsvcid": "$NVMF_PORT", 00:29:10.980 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.980 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.980 "hdgst": ${hdgst:-false}, 00:29:10.980 "ddgst": ${ddgst:-false} 00:29:10.980 }, 00:29:10.980 "method": "bdev_nvme_attach_controller" 00:29:10.980 } 00:29:10.980 EOF 00:29:10.980 )") 00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:10.980 { 00:29:10.980 "params": { 00:29:10.980 "name": "Nvme$subsystem", 00:29:10.980 "trtype": "$TEST_TRANSPORT", 00:29:10.980 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:10.980 "adrfam": "ipv4", 00:29:10.980 "trsvcid": "$NVMF_PORT", 00:29:10.980 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.980 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.980 "hdgst": ${hdgst:-false}, 00:29:10.980 "ddgst": ${ddgst:-false} 00:29:10.980 }, 00:29:10.980 "method": "bdev_nvme_attach_controller" 00:29:10.980 } 00:29:10.980 EOF 00:29:10.980 )") 00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1390669 00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:29:10.980 "params": { 00:29:10.980 "name": "Nvme1", 00:29:10.980 "trtype": "tcp", 00:29:10.980 "traddr": "10.0.0.2", 00:29:10.980 "adrfam": "ipv4", 00:29:10.980 "trsvcid": "4420", 00:29:10.980 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:10.980 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:10.980 "hdgst": false, 00:29:10.980 "ddgst": false 00:29:10.980 }, 00:29:10.980 "method": "bdev_nvme_attach_controller" 00:29:10.980 }' 00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:29:10.980 "params": { 00:29:10.980 "name": "Nvme1", 00:29:10.980 "trtype": "tcp", 00:29:10.980 "traddr": "10.0.0.2", 00:29:10.980 "adrfam": "ipv4", 00:29:10.980 "trsvcid": "4420", 00:29:10.980 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:10.980 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:10.980 "hdgst": false, 00:29:10.980 "ddgst": false 00:29:10.980 }, 00:29:10.980 "method": "bdev_nvme_attach_controller" 00:29:10.980 }' 00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:29:10.980 "params": { 00:29:10.980 "name": "Nvme1", 00:29:10.980 "trtype": "tcp", 00:29:10.980 "traddr": "10.0.0.2", 00:29:10.980 "adrfam": "ipv4", 00:29:10.980 "trsvcid": "4420", 00:29:10.980 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:10.980 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:10.980 "hdgst": false, 00:29:10.980 "ddgst": false 00:29:10.980 }, 00:29:10.980 "method": "bdev_nvme_attach_controller" 00:29:10.980 }' 00:29:10.980 18:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:29:10.980 "params": { 00:29:10.980 "name": "Nvme1", 00:29:10.980 "trtype": "tcp", 00:29:10.980 "traddr": "10.0.0.2", 00:29:10.980 "adrfam": "ipv4", 00:29:10.980 "trsvcid": "4420", 00:29:10.980 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:10.980 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:10.980 "hdgst": false, 00:29:10.980 "ddgst": false 00:29:10.980 }, 00:29:10.980 "method": "bdev_nvme_attach_controller" 00:29:10.980 }' 00:29:10.980 [2024-10-08 18:36:29.077579] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:29:10.980 [2024-10-08 18:36:29.077576] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:29:10.980 [2024-10-08 18:36:29.077576] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:29:10.980 [2024-10-08 18:36:29.077576] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
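Each printf above emits only the attach-controller params for one connection; gen_nvmf_target_json wraps that into a full SPDK JSON config which each bdevperf instance reads over the /dev/fd/63 process substitution. The overall shape is roughly the following (wrapper reconstructed for illustration only; just the params block is taken verbatim from this trace):
  cat <<-'JSON'    # illustrative; the test feeds this through /dev/fd/63, never a file
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  JSON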
00:29:10.980 [2024-10-08 18:36:29.077685] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:29:10.980 [2024-10-08 18:36:29.077703] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-10-08 18:36:29.077703] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-10-08 18:36:29.077703] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:29:10.980 --proc-type=auto ] 00:29:10.980 --proc-type=auto ] 00:29:10.980 [2024-10-08 18:36:29.260984] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.238 [2024-10-08 18:36:29.362586] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:29:11.238 [2024-10-08 18:36:29.366118] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.238 [2024-10-08 18:36:29.467496] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:29:11.238 [2024-10-08 18:36:29.490205] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.495 [2024-10-08 18:36:29.555019] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.495 [2024-10-08 18:36:29.604108] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:29:11.495 [2024-10-08 18:36:29.651800] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7 00:29:11.753 Running I/O for 1 seconds... 00:29:11.753 Running I/O for 1 seconds... 00:29:11.753 Running I/O for 1 seconds... 00:29:12.010 Running I/O for 1 seconds... 
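For reference, the four bdevperf instances now running their one-second jobs differ only in instance id, core mask and workload; their command lines, copied from the trace above, are:
  bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256   # WRITE_PID=1390669, file-prefix spdk1
  bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read  -t 1 -s 256   # READ_PID=1390670,  file-prefix spdk2
  bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256   # FLUSH_PID=1390673, file-prefix spdk3
  bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256   # UNMAP_PID=1390675, file-prefix spdk4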
00:29:12.945 7897.00 IOPS, 30.85 MiB/s [2024-10-08T16:36:31.232Z] 10322.00 IOPS, 40.32 MiB/s 00:29:12.945 Latency(us) 00:29:12.945 [2024-10-08T16:36:31.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.945 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:29:12.945 Nvme1n1 : 1.01 10384.70 40.57 0.00 0.00 12278.50 6213.78 17282.09 00:29:12.945 [2024-10-08T16:36:31.232Z] =================================================================================================================== 00:29:12.945 [2024-10-08T16:36:31.232Z] Total : 10384.70 40.57 0.00 0.00 12278.50 6213.78 17282.09 00:29:12.945 00:29:12.945 Latency(us) 00:29:12.945 [2024-10-08T16:36:31.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.945 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:29:12.945 Nvme1n1 : 1.01 7949.86 31.05 0.00 0.00 16019.39 5412.79 20583.16 00:29:12.945 [2024-10-08T16:36:31.232Z] =================================================================================================================== 00:29:12.945 [2024-10-08T16:36:31.232Z] Total : 7949.86 31.05 0.00 0.00 16019.39 5412.79 20583.16 00:29:12.945 192872.00 IOPS, 753.41 MiB/s 00:29:12.945 Latency(us) 00:29:12.945 [2024-10-08T16:36:31.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.945 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:29:12.945 Nvme1n1 : 1.00 192504.23 751.97 0.00 0.00 661.32 318.58 1881.13 00:29:12.945 [2024-10-08T16:36:31.232Z] =================================================================================================================== 00:29:12.945 [2024-10-08T16:36:31.232Z] Total : 192504.23 751.97 0.00 0.00 661.32 318.58 1881.13 00:29:13.203 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1390670 00:29:13.203 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1390673 00:29:13.203 10307.00 IOPS, 40.26 MiB/s 00:29:13.203 Latency(us) 00:29:13.203 [2024-10-08T16:36:31.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:13.203 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:29:13.203 Nvme1n1 : 1.01 10386.97 40.57 0.00 0.00 12284.20 2657.85 19320.98 00:29:13.203 [2024-10-08T16:36:31.490Z] =================================================================================================================== 00:29:13.203 [2024-10-08T16:36:31.490Z] Total : 10386.97 40.57 0.00 0.00 12284.20 2657.85 19320.98 00:29:13.203 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1390675 00:29:13.461 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:13.461 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.461 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:13.461 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.461 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:29:13.461 18:36:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:29:13.461 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:13.461 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:29:13.461 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:13.461 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:29:13.461 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:13.461 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:13.461 rmmod nvme_tcp 00:29:13.461 rmmod nvme_fabrics 00:29:13.461 rmmod nvme_keyring 00:29:13.461 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:13.461 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:29:13.461 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:29:13.461 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 1390524 ']' 00:29:13.461 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 1390524 00:29:13.461 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1390524 ']' 00:29:13.461 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1390524 00:29:13.461 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:29:13.461 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:13.461 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1390524 00:29:13.461 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:13.461 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:13.461 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1390524' 00:29:13.461 killing process with pid 1390524 00:29:13.461 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1390524 00:29:13.461 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1390524 00:29:13.719 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:13.719 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:13.719 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:13.719 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:29:13.719 18:36:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:29:13.719 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:13.719 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:29:13.719 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:13.719 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:13.719 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.719 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:13.719 18:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.251 18:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:16.251 00:29:16.251 real 0m7.912s 00:29:16.251 user 0m17.136s 00:29:16.251 sys 0m4.662s 00:29:16.251 18:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:16.251 18:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:29:16.251 ************************************ 00:29:16.251 END TEST nvmf_bdev_io_wait 00:29:16.251 ************************************ 00:29:16.251 18:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:16.251 18:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:16.251 18:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:16.251 18:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:16.251 ************************************ 00:29:16.251 START TEST nvmf_queue_depth 00:29:16.251 ************************************ 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:29:16.251 * Looking for test storage... 
00:29:16.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:16.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.251 --rc genhtml_branch_coverage=1 00:29:16.251 --rc genhtml_function_coverage=1 00:29:16.251 --rc genhtml_legend=1 00:29:16.251 --rc geninfo_all_blocks=1 00:29:16.251 --rc geninfo_unexecuted_blocks=1 00:29:16.251 00:29:16.251 ' 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:16.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.251 --rc genhtml_branch_coverage=1 00:29:16.251 --rc genhtml_function_coverage=1 00:29:16.251 --rc genhtml_legend=1 00:29:16.251 --rc geninfo_all_blocks=1 00:29:16.251 --rc geninfo_unexecuted_blocks=1 00:29:16.251 00:29:16.251 ' 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:16.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.251 --rc genhtml_branch_coverage=1 00:29:16.251 --rc genhtml_function_coverage=1 00:29:16.251 --rc genhtml_legend=1 00:29:16.251 --rc geninfo_all_blocks=1 00:29:16.251 --rc geninfo_unexecuted_blocks=1 00:29:16.251 00:29:16.251 ' 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:16.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.251 --rc genhtml_branch_coverage=1 00:29:16.251 --rc genhtml_function_coverage=1 00:29:16.251 --rc genhtml_legend=1 00:29:16.251 --rc geninfo_all_blocks=1 00:29:16.251 --rc 
geninfo_unexecuted_blocks=1 00:29:16.251 00:29:16.251 ' 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:16.251 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.252 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.252 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.252 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:29:16.252 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.252 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:29:16.252 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:16.252 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:16.252 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:16.252 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:16.252 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:29:16.252 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:16.252 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:16.252 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:16.252 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:16.252 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:16.252 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:29:16.252 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:29:16.252 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:16.252 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:29:16.252 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:16.252 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:16.252 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:16.252 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:16.252 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:16.252 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.252 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:16.252 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.252 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:16.252 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:16.252 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:29:16.252 18:36:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
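The gather_supported_nvmf_pci_devs trace that follows maps each supported PCI function to its kernel netdev through sysfs; the core of that loop, with paths and names as they appear further down in this log, is:
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:09:00.0/net/cvl_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the interface name
      net_devs+=("${pci_net_devs[@]}")                   # -> cvl_0_0, cvl_0_1
  done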
00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:18.153 18:36:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:18.153 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.153 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:18.154 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 
00:29:18.154 Found net devices under 0000:09:00.0: cvl_0_0 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:18.154 Found net devices under 0000:09:00.1: cvl_0_1 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:18.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:18.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:29:18.154 00:29:18.154 --- 10.0.0.2 ping statistics --- 00:29:18.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.154 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:18.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:18.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:29:18.154 00:29:18.154 --- 10.0.0.1 ping statistics --- 00:29:18.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.154 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:18.154 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:18.412 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:29:18.412 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:18.412 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:18.412 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:18.412 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=1392905 00:29:18.412 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 1392905 00:29:18.412 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1392905 ']' 00:29:18.412 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:29:18.412 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:18.412 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:18.412 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:18.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:18.412 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:18.412 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:18.412 [2024-10-08 18:36:36.504187] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:18.412 [2024-10-08 18:36:36.505341] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:29:18.412 [2024-10-08 18:36:36.505401] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:18.412 [2024-10-08 18:36:36.577587] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.412 [2024-10-08 18:36:36.688690] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:18.412 [2024-10-08 18:36:36.688747] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:18.412 [2024-10-08 18:36:36.688760] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:18.412 [2024-10-08 18:36:36.688771] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:18.412 [2024-10-08 18:36:36.688781] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:18.412 [2024-10-08 18:36:36.689396] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:29:18.671 [2024-10-08 18:36:36.780376] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:18.671 [2024-10-08 18:36:36.780708] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:18.671 [2024-10-08 18:36:36.829987] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:18.671 Malloc0 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:18.671 [2024-10-08 18:36:36.890124] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1393041 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1393041 /var/tmp/bdevperf.sock 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1393041 ']' 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:18.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:18.671 18:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:18.671 [2024-10-08 18:36:36.937215] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:29:18.671 [2024-10-08 18:36:36.937308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1393041 ] 00:29:18.930 [2024-10-08 18:36:36.993492] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.930 [2024-10-08 18:36:37.101873] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:18.930 18:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:18.930 18:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:29:18.930 18:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:19.187 18:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.187 18:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:19.187 NVMe0n1 00:29:19.187 18:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.187 18:36:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:19.444 Running I/O for 10 seconds... 00:29:21.309 8192.00 IOPS, 32.00 MiB/s [2024-10-08T16:36:40.982Z] 8462.50 IOPS, 33.06 MiB/s [2024-10-08T16:36:41.622Z] 8523.33 IOPS, 33.29 MiB/s [2024-10-08T16:36:42.997Z] 8451.25 IOPS, 33.01 MiB/s [2024-10-08T16:36:43.929Z] 8527.60 IOPS, 33.31 MiB/s [2024-10-08T16:36:44.861Z] 8535.50 IOPS, 33.34 MiB/s [2024-10-08T16:36:45.791Z] 8591.71 IOPS, 33.56 MiB/s [2024-10-08T16:36:46.723Z] 8579.38 IOPS, 33.51 MiB/s [2024-10-08T16:36:47.654Z] 8614.56 IOPS, 33.65 MiB/s [2024-10-08T16:36:47.654Z] 8604.90 IOPS, 33.61 MiB/s 00:29:29.367 Latency(us) 00:29:29.367 [2024-10-08T16:36:47.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.367 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:29:29.367 Verification LBA range: start 0x0 length 0x4000 00:29:29.367 NVMe0n1 : 10.07 8645.02 33.77 0.00 0.00 117998.84 16505.36 76895.57 00:29:29.367 [2024-10-08T16:36:47.654Z] =================================================================================================================== 00:29:29.367 [2024-10-08T16:36:47.654Z] Total : 8645.02 33.77 0.00 0.00 117998.84 16505.36 76895.57 00:29:29.367 { 00:29:29.367 "results": [ 00:29:29.368 { 00:29:29.368 "job": "NVMe0n1", 00:29:29.368 "core_mask": "0x1", 00:29:29.368 "workload": "verify", 00:29:29.368 "status": "finished", 00:29:29.368 "verify_range": { 00:29:29.368 "start": 0, 00:29:29.368 "length": 16384 00:29:29.368 }, 00:29:29.368 "queue_depth": 1024, 00:29:29.368 "io_size": 4096, 00:29:29.368 "runtime": 10.072039, 00:29:29.368 "iops": 8645.022125112899, 00:29:29.368 "mibps": 33.76961767622226, 00:29:29.368 "io_failed": 0, 00:29:29.368 "io_timeout": 0, 00:29:29.368 "avg_latency_us": 117998.8389987286, 00:29:29.368 "min_latency_us": 16505.36296296296, 00:29:29.368 "max_latency_us": 76895.57333333333 00:29:29.368 } 00:29:29.368 ], 
00:29:29.368 "core_count": 1 00:29:29.368 } 00:29:29.625 18:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1393041 00:29:29.625 18:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1393041 ']' 00:29:29.625 18:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1393041 00:29:29.625 18:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:29:29.625 18:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:29.625 18:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1393041 00:29:29.625 18:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:29.625 18:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:29.625 18:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1393041' 00:29:29.625 killing process with pid 1393041 00:29:29.625 18:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1393041 00:29:29.625 Received shutdown signal, test time was about 10.000000 seconds 00:29:29.625 00:29:29.625 Latency(us) 00:29:29.625 [2024-10-08T16:36:47.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.625 [2024-10-08T16:36:47.912Z] =================================================================================================================== 00:29:29.625 [2024-10-08T16:36:47.912Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:29.625 18:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1393041 00:29:29.883 18:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:29:29.883 18:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:29:29.883 18:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:29.883 18:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:29:29.883 18:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:29.883 18:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:29:29.883 18:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:29.883 18:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:29.883 rmmod nvme_tcp 00:29:29.883 rmmod nvme_fabrics 00:29:29.883 rmmod nvme_keyring 00:29:29.883 18:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:29.883 18:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:29:29.883 18:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:29:29.883 18:36:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 1392905 ']' 00:29:29.883 18:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 1392905 00:29:29.884 18:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1392905 ']' 00:29:29.884 18:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1392905 00:29:29.884 18:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:29:29.884 18:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:29.884 18:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1392905 00:29:29.884 18:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:29.884 18:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:29.884 18:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1392905' 00:29:29.884 killing process with pid 1392905 00:29:29.884 18:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1392905 00:29:29.884 18:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1392905 00:29:30.142 18:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:30.142 18:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:30.142 18:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:30.142 18:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:29:30.142 18:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:29:30.142 18:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:30.142 18:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:29:30.142 18:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:30.142 18:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:30.142 18:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.142 18:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:30.142 18:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.669 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:32.669 00:29:32.669 real 0m16.353s 00:29:32.669 user 0m22.537s 00:29:32.669 sys 0m3.509s 00:29:32.669 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:29:32.669 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:29:32.669 ************************************ 00:29:32.669 END TEST nvmf_queue_depth 00:29:32.669 ************************************ 00:29:32.669 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:29:32.669 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:32.669 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:32.669 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:32.669 ************************************ 00:29:32.669 START TEST nvmf_target_multipath 00:29:32.669 ************************************ 00:29:32.669 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:29:32.669 * Looking for test storage... 00:29:32.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:32.669 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:32.669 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:29:32.669 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:32.669 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:32.669 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:32.669 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:32.669 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:32.669 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:29:32.669 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:29:32.669 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:29:32.669 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:29:32.669 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:29:32.669 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:29:32.669 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:29:32.669 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:32.669 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:29:32.669 18:36:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:29:32.669 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:32.669 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:32.669 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:29:32.669 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:29:32.669 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:32.669 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:29:32.669 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:29:32.669 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:29:32.669 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:32.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.670 --rc genhtml_branch_coverage=1 00:29:32.670 --rc genhtml_function_coverage=1 00:29:32.670 --rc genhtml_legend=1 00:29:32.670 --rc geninfo_all_blocks=1 00:29:32.670 --rc geninfo_unexecuted_blocks=1 00:29:32.670 00:29:32.670 ' 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:32.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.670 --rc genhtml_branch_coverage=1 00:29:32.670 --rc genhtml_function_coverage=1 00:29:32.670 --rc genhtml_legend=1 00:29:32.670 --rc geninfo_all_blocks=1 00:29:32.670 --rc geninfo_unexecuted_blocks=1 00:29:32.670 00:29:32.670 ' 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:32.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.670 --rc genhtml_branch_coverage=1 00:29:32.670 --rc genhtml_function_coverage=1 00:29:32.670 --rc genhtml_legend=1 00:29:32.670 --rc geninfo_all_blocks=1 00:29:32.670 --rc 
geninfo_unexecuted_blocks=1 00:29:32.670 00:29:32.670 ' 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:32.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.670 --rc genhtml_branch_coverage=1 00:29:32.670 --rc genhtml_function_coverage=1 00:29:32.670 --rc genhtml_legend=1 00:29:32.670 --rc geninfo_all_blocks=1 00:29:32.670 --rc geninfo_unexecuted_blocks=1 00:29:32.670 00:29:32.670 ' 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:32.670 18:36:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:29:32.670 18:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:34.572 18:36:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:34.572 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:34.572 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:34.572 18:36:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:34.572 Found net devices under 0000:09:00.0: cvl_0_0 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:34.572 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:34.573 Found net devices under 0000:09:00.1: cvl_0_1 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:34.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:34.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:29:34.573 00:29:34.573 --- 10.0.0.2 ping statistics --- 00:29:34.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.573 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:34.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:34.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:29:34.573 00:29:34.573 --- 10.0.0.1 ping statistics --- 00:29:34.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.573 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:29:34.573 only one NIC for nvmf test 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:34.573 rmmod nvme_tcp 00:29:34.573 rmmod nvme_fabrics 00:29:34.573 rmmod nvme_keyring 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:29:34.573 18:36:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:34.573 18:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.114 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:37.114 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:29:37.114 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:29:37.114 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:37.114 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:29:37.115 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:37.115 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:29:37.115 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:37.115 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:37.115 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:37.115 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:29:37.115 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:29:37.115 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:29:37.115 18:36:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:37.115 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:37.115 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:37.115 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:29:37.115 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:29:37.115 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:37.115 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:29:37.115 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:37.115 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:37.115 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.115 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:37.115 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.115 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:37.115 00:29:37.115 real 0m4.448s 00:29:37.115 user 0m0.900s 00:29:37.115 sys 0m1.541s 00:29:37.115 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:37.115 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:37.115 ************************************ 00:29:37.115 END TEST nvmf_target_multipath 00:29:37.115 ************************************ 00:29:37.115 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:29:37.115 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:37.115 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:37.115 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:37.115 ************************************ 00:29:37.115 START TEST nvmf_zcopy 00:29:37.115 ************************************ 00:29:37.115 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:29:37.115 * Looking for test storage... 
00:29:37.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:37.115 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:37.115 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:29:37.115 18:36:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:37.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.115 --rc genhtml_branch_coverage=1 00:29:37.115 --rc genhtml_function_coverage=1 00:29:37.115 --rc genhtml_legend=1 00:29:37.115 --rc geninfo_all_blocks=1 00:29:37.115 --rc geninfo_unexecuted_blocks=1 00:29:37.115 00:29:37.115 ' 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:37.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.115 --rc genhtml_branch_coverage=1 00:29:37.115 --rc genhtml_function_coverage=1 00:29:37.115 --rc genhtml_legend=1 00:29:37.115 --rc geninfo_all_blocks=1 00:29:37.115 --rc geninfo_unexecuted_blocks=1 00:29:37.115 00:29:37.115 ' 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:37.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.115 --rc genhtml_branch_coverage=1 00:29:37.115 --rc genhtml_function_coverage=1 00:29:37.115 --rc genhtml_legend=1 00:29:37.115 --rc geninfo_all_blocks=1 00:29:37.115 --rc geninfo_unexecuted_blocks=1 00:29:37.115 00:29:37.115 ' 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:37.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.115 --rc genhtml_branch_coverage=1 00:29:37.115 --rc genhtml_function_coverage=1 00:29:37.115 --rc genhtml_legend=1 00:29:37.115 --rc geninfo_all_blocks=1 00:29:37.115 --rc geninfo_unexecuted_blocks=1 00:29:37.115 00:29:37.115 ' 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:37.115 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:37.116 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:29:37.116 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:37.116 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:37.116 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:37.116 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.116 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.116 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.116 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:29:37.116 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.116 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:29:37.116 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:37.116 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:37.116 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:37.116 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:37.116 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:37.116 18:36:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:37.116 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:37.116 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:37.116 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:37.116 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:37.116 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:29:37.116 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:37.116 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:37.116 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:37.116 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:37.116 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:37.116 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.116 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:37.116 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.116 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:37.116 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:37.116 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:29:37.116 18:36:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:39.013 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:39.013 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:29:39.013 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:39.013 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:39.013 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:39.013 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:39.013 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:39.013 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:29:39.013 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:39.013 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:29:39.013 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:29:39.013 18:36:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:29:39.013 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:29:39.013 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:29:39.013 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:29:39.013 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:39.013 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:39.013 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:39.013 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:39.014 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:39.014 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:39.014 Found net devices under 0000:09:00.0: cvl_0_0 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:39.014 Found net devices under 0000:09:00.1: cvl_0_1 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:39.014 18:36:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:39.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:39.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:29:39.014 00:29:39.014 --- 10.0.0.2 ping statistics --- 00:29:39.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:39.014 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:39.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:39.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:29:39.014 00:29:39.014 --- 10.0.0.1 ping statistics --- 00:29:39.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:39.014 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=1398107 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 1398107 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1398107 ']' 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:39.014 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:39.015 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:39.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:39.015 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:39.015 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:39.272 [2024-10-08 18:36:57.326531] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:39.272 [2024-10-08 18:36:57.327663] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:29:39.272 [2024-10-08 18:36:57.327736] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:39.272 [2024-10-08 18:36:57.394338] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.272 [2024-10-08 18:36:57.503348] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:39.272 [2024-10-08 18:36:57.503415] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:39.272 [2024-10-08 18:36:57.503445] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:39.272 [2024-10-08 18:36:57.503456] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:39.272 [2024-10-08 18:36:57.503466] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:39.272 [2024-10-08 18:36:57.504082] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:29:39.530 [2024-10-08 18:36:57.600548] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:39.530 [2024-10-08 18:36:57.600875] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
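At this point nvmf_tgt (pid 1398107) is running inside the cvl_0_0_ns_spdk namespace in interrupt mode and listening on /var/tmp/spdk.sock, and the trace that follows configures it over JSON-RPC (the harness's rpc_cmd wrapper effectively forwards its arguments to SPDK's scripts/rpc.py). A minimal consolidated sketch of that same configuration sequence, assuming scripts/rpc.py is invoked directly against the default /var/tmp/spdk.sock socket:

    # create the TCP transport with the options used by this test
    # (-c 0: in-capsule data size 0, --zcopy: enable zero-copy)
    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    # create the subsystem: allow any host, serial SPDK00000000000001, max 10 namespaces
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    # listen on the namespaced interface (10.0.0.2:4420) and expose the discovery service there too
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # back the subsystem with a 32 MiB malloc bdev (4096-byte blocks) as namespace 1
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The bdevperf initiator started later in the trace then attaches to this target at traddr 10.0.0.2, trsvcid 4420 (nqn.2016-06.io.spdk:cnode1) using the JSON config emitted by gen_nvmf_target_json.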
00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:39.530 [2024-10-08 18:36:57.656737] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:39.530 [2024-10-08 18:36:57.672896] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:29:39.530 18:36:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:39.530 malloc0 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:39.530 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:39.530 { 00:29:39.530 "params": { 00:29:39.530 "name": "Nvme$subsystem", 00:29:39.530 "trtype": "$TEST_TRANSPORT", 00:29:39.530 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:39.530 "adrfam": "ipv4", 00:29:39.530 "trsvcid": "$NVMF_PORT", 00:29:39.530 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:39.530 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:39.530 "hdgst": ${hdgst:-false}, 00:29:39.530 "ddgst": ${ddgst:-false} 00:29:39.530 }, 00:29:39.531 "method": "bdev_nvme_attach_controller" 00:29:39.531 } 00:29:39.531 EOF 00:29:39.531 )") 00:29:39.531 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:29:39.531 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:29:39.531 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:29:39.531 18:36:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:29:39.531 "params": { 00:29:39.531 "name": "Nvme1", 00:29:39.531 "trtype": "tcp", 00:29:39.531 "traddr": "10.0.0.2", 00:29:39.531 "adrfam": "ipv4", 00:29:39.531 "trsvcid": "4420", 00:29:39.531 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:39.531 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:39.531 "hdgst": false, 00:29:39.531 "ddgst": false 00:29:39.531 }, 00:29:39.531 "method": "bdev_nvme_attach_controller" 00:29:39.531 }' 00:29:39.531 [2024-10-08 18:36:57.773381] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:29:39.531 [2024-10-08 18:36:57.773467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1398245 ] 00:29:39.788 [2024-10-08 18:36:57.839141] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.788 [2024-10-08 18:36:57.950324] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.046 Running I/O for 10 seconds... 00:29:41.910 5571.00 IOPS, 43.52 MiB/s [2024-10-08T16:37:01.569Z] 5644.50 IOPS, 44.10 MiB/s [2024-10-08T16:37:02.501Z] 5675.33 IOPS, 44.34 MiB/s [2024-10-08T16:37:03.435Z] 5671.25 IOPS, 44.31 MiB/s [2024-10-08T16:37:04.367Z] 5684.00 IOPS, 44.41 MiB/s [2024-10-08T16:37:05.300Z] 5684.67 IOPS, 44.41 MiB/s [2024-10-08T16:37:06.233Z] 5682.14 IOPS, 44.39 MiB/s [2024-10-08T16:37:07.168Z] 5689.38 IOPS, 44.45 MiB/s [2024-10-08T16:37:08.539Z] 5703.78 IOPS, 44.56 MiB/s [2024-10-08T16:37:08.539Z] 5703.20 IOPS, 44.56 MiB/s 00:29:50.252 Latency(us) 00:29:50.252 [2024-10-08T16:37:08.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.252 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:29:50.252 Verification LBA range: start 0x0 length 0x1000 00:29:50.252 Nvme1n1 : 10.01 5707.44 44.59 0.00 0.00 22365.13 2378.71 29515.47 00:29:50.252 [2024-10-08T16:37:08.539Z] =================================================================================================================== 00:29:50.252 [2024-10-08T16:37:08.539Z] Total : 5707.44 44.59 0.00 0.00 22365.13 2378.71 29515.47 00:29:50.252 18:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1399436 00:29:50.252 18:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:29:50.252 18:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:50.252 18:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:29:50.252 18:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:29:50.252 18:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:29:50.252 18:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:29:50.252 18:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:50.252 18:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:50.252 { 00:29:50.252 "params": { 00:29:50.252 "name": "Nvme$subsystem", 00:29:50.252 "trtype": "$TEST_TRANSPORT", 00:29:50.252 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:50.252 "adrfam": "ipv4", 00:29:50.252 "trsvcid": "$NVMF_PORT", 00:29:50.252 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:50.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:50.252 "hdgst": ${hdgst:-false}, 00:29:50.252 "ddgst": ${ddgst:-false} 00:29:50.252 }, 00:29:50.252 "method": "bdev_nvme_attach_controller" 00:29:50.252 } 00:29:50.252 EOF 00:29:50.252 )") 00:29:50.252 18:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:29:50.252 
[2024-10-08 18:37:08.428683] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.252 [2024-10-08 18:37:08.428720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.252 18:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:29:50.252 18:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:29:50.252 18:37:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:29:50.252 "params": { 00:29:50.252 "name": "Nvme1", 00:29:50.252 "trtype": "tcp", 00:29:50.252 "traddr": "10.0.0.2", 00:29:50.252 "adrfam": "ipv4", 00:29:50.252 "trsvcid": "4420", 00:29:50.252 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:50.252 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:50.252 "hdgst": false, 00:29:50.252 "ddgst": false 00:29:50.252 }, 00:29:50.252 "method": "bdev_nvme_attach_controller" 00:29:50.252 }' 00:29:50.252 [2024-10-08 18:37:08.436608] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.252 [2024-10-08 18:37:08.436630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.252 [2024-10-08 18:37:08.444605] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.252 [2024-10-08 18:37:08.444627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.252 [2024-10-08 18:37:08.452603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.252 [2024-10-08 18:37:08.452624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.252 [2024-10-08 18:37:08.460602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.252 [2024-10-08 18:37:08.460623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.252 [2024-10-08 18:37:08.468609] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.252 [2024-10-08 18:37:08.468628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.252 [2024-10-08 18:37:08.469703] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:29:50.252 [2024-10-08 18:37:08.469781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1399436 ] 00:29:50.252 [2024-10-08 18:37:08.476602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.252 [2024-10-08 18:37:08.476623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.252 [2024-10-08 18:37:08.484616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.252 [2024-10-08 18:37:08.484636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.252 [2024-10-08 18:37:08.492594] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.252 [2024-10-08 18:37:08.492616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.252 [2024-10-08 18:37:08.500602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.252 [2024-10-08 18:37:08.500622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.252 [2024-10-08 18:37:08.508598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.252 [2024-10-08 18:37:08.508617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.252 [2024-10-08 18:37:08.516599] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.252 [2024-10-08 18:37:08.516619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.252 [2024-10-08 18:37:08.524603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.252 [2024-10-08 18:37:08.524623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.252 [2024-10-08 18:37:08.531407] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.252 [2024-10-08 18:37:08.532597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.252 [2024-10-08 18:37:08.532616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.510 [2024-10-08 18:37:08.540640] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.510 [2024-10-08 18:37:08.540688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.510 [2024-10-08 18:37:08.548611] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.510 [2024-10-08 18:37:08.548649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.510 [2024-10-08 18:37:08.556597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.510 [2024-10-08 18:37:08.556617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.510 [2024-10-08 18:37:08.564598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.510 [2024-10-08 18:37:08.564618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.510 [2024-10-08 18:37:08.572598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.510 [2024-10-08 18:37:08.572617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:29:50.510 [2024-10-08 18:37:08.580598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.510 [2024-10-08 18:37:08.580617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.510 [2024-10-08 18:37:08.588598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.510 [2024-10-08 18:37:08.588617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.510 [2024-10-08 18:37:08.596647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.510 [2024-10-08 18:37:08.596679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.510 [2024-10-08 18:37:08.604598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.510 [2024-10-08 18:37:08.604617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.510 [2024-10-08 18:37:08.612619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.510 [2024-10-08 18:37:08.612653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.510 [2024-10-08 18:37:08.620598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.510 [2024-10-08 18:37:08.620618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.510 [2024-10-08 18:37:08.628598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.510 [2024-10-08 18:37:08.628617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.510 [2024-10-08 18:37:08.636597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.510 [2024-10-08 18:37:08.636616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.510 [2024-10-08 18:37:08.643119] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.510 [2024-10-08 18:37:08.644596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.510 [2024-10-08 18:37:08.644615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.510 [2024-10-08 18:37:08.652600] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.510 [2024-10-08 18:37:08.652619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.510 [2024-10-08 18:37:08.660641] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.510 [2024-10-08 18:37:08.660671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.510 [2024-10-08 18:37:08.668629] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.510 [2024-10-08 18:37:08.668678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.510 [2024-10-08 18:37:08.676645] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.510 [2024-10-08 18:37:08.676678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.510 [2024-10-08 18:37:08.684628] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.510 [2024-10-08 18:37:08.684675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.510 [2024-10-08 
18:37:08.692634] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.510 [2024-10-08 18:37:08.692682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.510 [2024-10-08 18:37:08.700632] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.510 [2024-10-08 18:37:08.700681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.510 [2024-10-08 18:37:08.708602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.510 [2024-10-08 18:37:08.708622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.510 [2024-10-08 18:37:08.716628] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.510 [2024-10-08 18:37:08.716674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.510 [2024-10-08 18:37:08.724645] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.510 [2024-10-08 18:37:08.724678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.510 [2024-10-08 18:37:08.732598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.510 [2024-10-08 18:37:08.732618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.510 [2024-10-08 18:37:08.740598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.510 [2024-10-08 18:37:08.740617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.510 [2024-10-08 18:37:08.748610] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.510 [2024-10-08 18:37:08.748649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.510 [2024-10-08 18:37:08.756609] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.510 [2024-10-08 18:37:08.756654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.510 [2024-10-08 18:37:08.764602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.510 [2024-10-08 18:37:08.764624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.510 [2024-10-08 18:37:08.772728] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.510 [2024-10-08 18:37:08.772754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.510 [2024-10-08 18:37:08.780603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.510 [2024-10-08 18:37:08.780625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.510 [2024-10-08 18:37:08.788719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.510 [2024-10-08 18:37:08.788744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.510 [2024-10-08 18:37:08.796589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.510 [2024-10-08 18:37:08.796626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.768 [2024-10-08 18:37:08.804609] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.768 [2024-10-08 18:37:08.804647] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.768 [2024-10-08 18:37:08.812606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.768 [2024-10-08 18:37:08.812628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.768 Running I/O for 5 seconds... 00:29:50.768 [2024-10-08 18:37:08.828506] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.768 [2024-10-08 18:37:08.828536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.768 [2024-10-08 18:37:08.838249] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.768 [2024-10-08 18:37:08.838281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.768 [2024-10-08 18:37:08.850422] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.768 [2024-10-08 18:37:08.850450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.768 [2024-10-08 18:37:08.864857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.768 [2024-10-08 18:37:08.864897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.768 [2024-10-08 18:37:08.874405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.768 [2024-10-08 18:37:08.874433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.768 [2024-10-08 18:37:08.886662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.768 [2024-10-08 18:37:08.886688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.768 [2024-10-08 18:37:08.901869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.768 [2024-10-08 18:37:08.901895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.768 [2024-10-08 18:37:08.911763] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.768 [2024-10-08 18:37:08.911788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.768 [2024-10-08 18:37:08.926326] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.768 [2024-10-08 18:37:08.926352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.768 [2024-10-08 18:37:08.936303] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.768 [2024-10-08 18:37:08.936342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.768 [2024-10-08 18:37:08.948086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.768 [2024-10-08 18:37:08.948109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.768 [2024-10-08 18:37:08.958583] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.768 [2024-10-08 18:37:08.958608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.768 [2024-10-08 18:37:08.972495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.768 [2024-10-08 18:37:08.972524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.768 [2024-10-08 18:37:08.982474] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.768 [2024-10-08 18:37:08.982500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.768 [2024-10-08 18:37:08.994772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.768 [2024-10-08 18:37:08.994797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.768 [2024-10-08 18:37:09.007929] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.769 [2024-10-08 18:37:09.007955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.769 [2024-10-08 18:37:09.017695] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.769 [2024-10-08 18:37:09.017719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.769 [2024-10-08 18:37:09.029887] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.769 [2024-10-08 18:37:09.029912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.769 [2024-10-08 18:37:09.041480] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.769 [2024-10-08 18:37:09.041507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:50.769 [2024-10-08 18:37:09.052747] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:50.769 [2024-10-08 18:37:09.052773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.027 [2024-10-08 18:37:09.063344] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.027 [2024-10-08 18:37:09.063371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.027 [2024-10-08 18:37:09.075460] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.027 [2024-10-08 18:37:09.075487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.027 [2024-10-08 18:37:09.086514] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.027 [2024-10-08 18:37:09.086539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.027 [2024-10-08 18:37:09.102555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.027 [2024-10-08 18:37:09.102599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.027 [2024-10-08 18:37:09.112415] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.027 [2024-10-08 18:37:09.112442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.027 [2024-10-08 18:37:09.124511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.027 [2024-10-08 18:37:09.124536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.027 [2024-10-08 18:37:09.135186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.027 [2024-10-08 18:37:09.135211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.027 [2024-10-08 18:37:09.148758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.027 [2024-10-08 18:37:09.148797] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.027 [2024-10-08 18:37:09.158481] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.027 [2024-10-08 18:37:09.158506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.027 [2024-10-08 18:37:09.170133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.027 [2024-10-08 18:37:09.170157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.027 [2024-10-08 18:37:09.185701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.027 [2024-10-08 18:37:09.185739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.027 [2024-10-08 18:37:09.194989] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.027 [2024-10-08 18:37:09.195014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.027 [2024-10-08 18:37:09.211427] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.027 [2024-10-08 18:37:09.211453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.027 [2024-10-08 18:37:09.227342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.027 [2024-10-08 18:37:09.227367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.027 [2024-10-08 18:37:09.242922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.027 [2024-10-08 18:37:09.242949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.027 [2024-10-08 18:37:09.252800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.027 [2024-10-08 18:37:09.252825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.027 [2024-10-08 18:37:09.264982] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.027 [2024-10-08 18:37:09.265005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.027 [2024-10-08 18:37:09.275998] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.027 [2024-10-08 18:37:09.276022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.027 [2024-10-08 18:37:09.290902] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.027 [2024-10-08 18:37:09.290926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.027 [2024-10-08 18:37:09.299999] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.027 [2024-10-08 18:37:09.300023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.027 [2024-10-08 18:37:09.311905] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.027 [2024-10-08 18:37:09.311929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.285 [2024-10-08 18:37:09.324496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.285 [2024-10-08 18:37:09.324525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.285 [2024-10-08 18:37:09.334035] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.285 [2024-10-08 18:37:09.334060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.285 [2024-10-08 18:37:09.350377] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.285 [2024-10-08 18:37:09.350404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.285 [2024-10-08 18:37:09.359968] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.285 [2024-10-08 18:37:09.359994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.285 [2024-10-08 18:37:09.372093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.285 [2024-10-08 18:37:09.372117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.285 [2024-10-08 18:37:09.382771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.285 [2024-10-08 18:37:09.382796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.285 [2024-10-08 18:37:09.398714] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.285 [2024-10-08 18:37:09.398739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.285 [2024-10-08 18:37:09.407890] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.285 [2024-10-08 18:37:09.407916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.285 [2024-10-08 18:37:09.419783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.285 [2024-10-08 18:37:09.419808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.285 [2024-10-08 18:37:09.430546] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.285 [2024-10-08 18:37:09.430585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.285 [2024-10-08 18:37:09.446850] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.285 [2024-10-08 18:37:09.446874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.285 [2024-10-08 18:37:09.457013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.285 [2024-10-08 18:37:09.457038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.285 [2024-10-08 18:37:09.470898] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.285 [2024-10-08 18:37:09.470922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.285 [2024-10-08 18:37:09.486757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.285 [2024-10-08 18:37:09.486782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.285 [2024-10-08 18:37:09.496312] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.285 [2024-10-08 18:37:09.496353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.285 [2024-10-08 18:37:09.508262] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.285 [2024-10-08 18:37:09.508312] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.285 [2024-10-08 18:37:09.519346] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.285 [2024-10-08 18:37:09.519372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.285 [2024-10-08 18:37:09.534140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.285 [2024-10-08 18:37:09.534167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.285 [2024-10-08 18:37:09.543473] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.285 [2024-10-08 18:37:09.543500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.285 [2024-10-08 18:37:09.557627] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.285 [2024-10-08 18:37:09.557652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.285 [2024-10-08 18:37:09.567315] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.285 [2024-10-08 18:37:09.567355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.543 [2024-10-08 18:37:09.583177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.543 [2024-10-08 18:37:09.583201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.543 [2024-10-08 18:37:09.599388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.543 [2024-10-08 18:37:09.599419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.543 [2024-10-08 18:37:09.609660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.543 [2024-10-08 18:37:09.609685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.543 [2024-10-08 18:37:09.621750] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.543 [2024-10-08 18:37:09.621777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.543 [2024-10-08 18:37:09.632670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.544 [2024-10-08 18:37:09.632695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.544 [2024-10-08 18:37:09.643536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.544 [2024-10-08 18:37:09.643562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.544 [2024-10-08 18:37:09.657735] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.544 [2024-10-08 18:37:09.657762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.544 [2024-10-08 18:37:09.667377] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.544 [2024-10-08 18:37:09.667403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.544 [2024-10-08 18:37:09.681329] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.544 [2024-10-08 18:37:09.681370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.544 [2024-10-08 18:37:09.690770] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.544 [2024-10-08 18:37:09.690795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.544 [2024-10-08 18:37:09.704795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.544 [2024-10-08 18:37:09.704819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.544 [2024-10-08 18:37:09.714049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.544 [2024-10-08 18:37:09.714074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.544 [2024-10-08 18:37:09.726009] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.544 [2024-10-08 18:37:09.726033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.544 [2024-10-08 18:37:09.741976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.544 [2024-10-08 18:37:09.742001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.544 [2024-10-08 18:37:09.751912] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.544 [2024-10-08 18:37:09.751936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.544 [2024-10-08 18:37:09.763732] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.544 [2024-10-08 18:37:09.763756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.544 [2024-10-08 18:37:09.777976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.544 [2024-10-08 18:37:09.778017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.544 [2024-10-08 18:37:09.787420] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.544 [2024-10-08 18:37:09.787445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.544 [2024-10-08 18:37:09.802081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.544 [2024-10-08 18:37:09.802105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.544 [2024-10-08 18:37:09.811472] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.544 [2024-10-08 18:37:09.811497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.544 11467.00 IOPS, 89.59 MiB/s [2024-10-08T16:37:09.831Z] [2024-10-08 18:37:09.825502] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.544 [2024-10-08 18:37:09.825527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.802 [2024-10-08 18:37:09.835127] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.802 [2024-10-08 18:37:09.835152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.802 [2024-10-08 18:37:09.849270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.802 [2024-10-08 18:37:09.849303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.802 [2024-10-08 18:37:09.858860] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:29:51.802 [2024-10-08 18:37:09.858885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.802 [2024-10-08 18:37:09.873727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.802 [2024-10-08 18:37:09.873760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.802 [2024-10-08 18:37:09.883168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.802 [2024-10-08 18:37:09.883193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.802 [2024-10-08 18:37:09.899518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.802 [2024-10-08 18:37:09.899544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.802 [2024-10-08 18:37:09.913743] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.802 [2024-10-08 18:37:09.913769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.802 [2024-10-08 18:37:09.923602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.802 [2024-10-08 18:37:09.923628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.802 [2024-10-08 18:37:09.938426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.802 [2024-10-08 18:37:09.938453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.802 [2024-10-08 18:37:09.954254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.802 [2024-10-08 18:37:09.954305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.802 [2024-10-08 18:37:09.964382] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.802 [2024-10-08 18:37:09.964410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.802 [2024-10-08 18:37:09.976227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.802 [2024-10-08 18:37:09.976251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.802 [2024-10-08 18:37:09.987057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.802 [2024-10-08 18:37:09.987081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.802 [2024-10-08 18:37:10.001587] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.802 [2024-10-08 18:37:10.001620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.802 [2024-10-08 18:37:10.021617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.802 [2024-10-08 18:37:10.021652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.802 [2024-10-08 18:37:10.031310] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.802 [2024-10-08 18:37:10.031338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.802 [2024-10-08 18:37:10.046525] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.802 [2024-10-08 18:37:10.046578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.802 [2024-10-08 18:37:10.055876] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.802 [2024-10-08 18:37:10.055901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.802 [2024-10-08 18:37:10.067978] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.802 [2024-10-08 18:37:10.068002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:51.802 [2024-10-08 18:37:10.078377] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:51.802 [2024-10-08 18:37:10.078404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.059 [2024-10-08 18:37:10.095562] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.059 [2024-10-08 18:37:10.095603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.059 [2024-10-08 18:37:10.110126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.059 [2024-10-08 18:37:10.110153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.059 [2024-10-08 18:37:10.120565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.059 [2024-10-08 18:37:10.120617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.059 [2024-10-08 18:37:10.132512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.059 [2024-10-08 18:37:10.132539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.059 [2024-10-08 18:37:10.143892] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.059 [2024-10-08 18:37:10.143933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.059 [2024-10-08 18:37:10.156864] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.059 [2024-10-08 18:37:10.156892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.059 [2024-10-08 18:37:10.166673] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.059 [2024-10-08 18:37:10.166697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.059 [2024-10-08 18:37:10.178868] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.059 [2024-10-08 18:37:10.178892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.059 [2024-10-08 18:37:10.194314] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.059 [2024-10-08 18:37:10.194353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.059 [2024-10-08 18:37:10.203548] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.059 [2024-10-08 18:37:10.203591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.059 [2024-10-08 18:37:10.217390] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.059 [2024-10-08 18:37:10.217417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.059 [2024-10-08 18:37:10.227070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.059 [2024-10-08 18:37:10.227111] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.059 [2024-10-08 18:37:10.238647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.059 [2024-10-08 18:37:10.238672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.059 [2024-10-08 18:37:10.254587] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.059 [2024-10-08 18:37:10.254628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.059 [2024-10-08 18:37:10.264199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.059 [2024-10-08 18:37:10.264223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.059 [2024-10-08 18:37:10.276176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.059 [2024-10-08 18:37:10.276201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.059 [2024-10-08 18:37:10.287027] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.059 [2024-10-08 18:37:10.287051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.059 [2024-10-08 18:37:10.300556] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.059 [2024-10-08 18:37:10.300582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.060 [2024-10-08 18:37:10.310169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.060 [2024-10-08 18:37:10.310194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.060 [2024-10-08 18:37:10.325966] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.060 [2024-10-08 18:37:10.326006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.060 [2024-10-08 18:37:10.335823] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.060 [2024-10-08 18:37:10.335862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.317 [2024-10-08 18:37:10.347883] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.317 [2024-10-08 18:37:10.347917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.317 [2024-10-08 18:37:10.358702] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.317 [2024-10-08 18:37:10.358741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.317 [2024-10-08 18:37:10.374497] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.317 [2024-10-08 18:37:10.374523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.317 [2024-10-08 18:37:10.384406] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.317 [2024-10-08 18:37:10.384431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.317 [2024-10-08 18:37:10.396189] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.317 [2024-10-08 18:37:10.396213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.317 [2024-10-08 18:37:10.407090] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.317 [2024-10-08 18:37:10.407115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.317 [2024-10-08 18:37:10.419881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.317 [2024-10-08 18:37:10.419909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.317 [2024-10-08 18:37:10.434156] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.317 [2024-10-08 18:37:10.434183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.317 [2024-10-08 18:37:10.443911] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.317 [2024-10-08 18:37:10.443950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.317 [2024-10-08 18:37:10.455685] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.317 [2024-10-08 18:37:10.455711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.317 [2024-10-08 18:37:10.469417] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.317 [2024-10-08 18:37:10.469444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.317 [2024-10-08 18:37:10.479135] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.317 [2024-10-08 18:37:10.479160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.317 [2024-10-08 18:37:10.492938] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.317 [2024-10-08 18:37:10.492962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.317 [2024-10-08 18:37:10.503040] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.317 [2024-10-08 18:37:10.503063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.317 [2024-10-08 18:37:10.517628] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.317 [2024-10-08 18:37:10.517652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.317 [2024-10-08 18:37:10.527835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.317 [2024-10-08 18:37:10.527873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.317 [2024-10-08 18:37:10.540191] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.317 [2024-10-08 18:37:10.540216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.317 [2024-10-08 18:37:10.551468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.317 [2024-10-08 18:37:10.551494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.317 [2024-10-08 18:37:10.566441] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.317 [2024-10-08 18:37:10.566467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.317 [2024-10-08 18:37:10.575753] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.317 [2024-10-08 18:37:10.575777] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.317 [2024-10-08 18:37:10.589726] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.317 [2024-10-08 18:37:10.589752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.317 [2024-10-08 18:37:10.599618] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.317 [2024-10-08 18:37:10.599657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.575 [2024-10-08 18:37:10.615426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.575 [2024-10-08 18:37:10.615452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.575 [2024-10-08 18:37:10.629699] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.575 [2024-10-08 18:37:10.629726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.575 [2024-10-08 18:37:10.639099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.575 [2024-10-08 18:37:10.639139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.575 [2024-10-08 18:37:10.654959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.575 [2024-10-08 18:37:10.654985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.575 [2024-10-08 18:37:10.672828] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.575 [2024-10-08 18:37:10.672852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.575 [2024-10-08 18:37:10.682610] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.575 [2024-10-08 18:37:10.682634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.575 [2024-10-08 18:37:10.694707] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.575 [2024-10-08 18:37:10.694732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.575 [2024-10-08 18:37:10.710359] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.575 [2024-10-08 18:37:10.710387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.575 [2024-10-08 18:37:10.719987] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.575 [2024-10-08 18:37:10.720011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.575 [2024-10-08 18:37:10.732151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.575 [2024-10-08 18:37:10.732177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.575 [2024-10-08 18:37:10.743352] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.575 [2024-10-08 18:37:10.743393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.575 [2024-10-08 18:37:10.758592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.575 [2024-10-08 18:37:10.758618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.575 [2024-10-08 18:37:10.768551] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.575 [2024-10-08 18:37:10.768590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.575 [2024-10-08 18:37:10.780614] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.575 [2024-10-08 18:37:10.780644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.575 [2024-10-08 18:37:10.791821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.575 [2024-10-08 18:37:10.791844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.575 [2024-10-08 18:37:10.804831] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.575 [2024-10-08 18:37:10.804858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.575 [2024-10-08 18:37:10.814414] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.575 [2024-10-08 18:37:10.814441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.575 11490.00 IOPS, 89.77 MiB/s [2024-10-08T16:37:10.862Z] [2024-10-08 18:37:10.826587] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.575 [2024-10-08 18:37:10.826613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.575 [2024-10-08 18:37:10.842065] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.575 [2024-10-08 18:37:10.842106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.575 [2024-10-08 18:37:10.851648] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.575 [2024-10-08 18:37:10.851673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.833 [2024-10-08 18:37:10.867545] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.833 [2024-10-08 18:37:10.867587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.833 [2024-10-08 18:37:10.878296] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.833 [2024-10-08 18:37:10.878320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.833 [2024-10-08 18:37:10.894162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.833 [2024-10-08 18:37:10.894186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.833 [2024-10-08 18:37:10.904503] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.833 [2024-10-08 18:37:10.904531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.833 [2024-10-08 18:37:10.916553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.833 [2024-10-08 18:37:10.916592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.833 [2024-10-08 18:37:10.927380] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.833 [2024-10-08 18:37:10.927407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.833 [2024-10-08 18:37:10.943035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:29:52.833 [2024-10-08 18:37:10.943061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.833 [2024-10-08 18:37:10.958930] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.833 [2024-10-08 18:37:10.958969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.834 [2024-10-08 18:37:10.968513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.834 [2024-10-08 18:37:10.968540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.834 [2024-10-08 18:37:10.980873] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.834 [2024-10-08 18:37:10.980897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.834 [2024-10-08 18:37:10.992261] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.834 [2024-10-08 18:37:10.992309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.834 [2024-10-08 18:37:11.005039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.834 [2024-10-08 18:37:11.005065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.834 [2024-10-08 18:37:11.014360] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.834 [2024-10-08 18:37:11.014386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.834 [2024-10-08 18:37:11.026708] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.834 [2024-10-08 18:37:11.026734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.834 [2024-10-08 18:37:11.042481] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.834 [2024-10-08 18:37:11.042516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.834 [2024-10-08 18:37:11.052247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.834 [2024-10-08 18:37:11.052297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.834 [2024-10-08 18:37:11.064086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.834 [2024-10-08 18:37:11.064111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.834 [2024-10-08 18:37:11.074959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.834 [2024-10-08 18:37:11.074982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.834 [2024-10-08 18:37:11.089432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.834 [2024-10-08 18:37:11.089458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.834 [2024-10-08 18:37:11.098657] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.834 [2024-10-08 18:37:11.098681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:52.834 [2024-10-08 18:37:11.110660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:52.834 [2024-10-08 18:37:11.110684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.092 [2024-10-08 18:37:11.127340] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.092 [2024-10-08 18:37:11.127368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.092 [2024-10-08 18:37:11.142379] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.092 [2024-10-08 18:37:11.142406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.092 [2024-10-08 18:37:11.152122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.092 [2024-10-08 18:37:11.152148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.092 [2024-10-08 18:37:11.164402] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.092 [2024-10-08 18:37:11.164429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.092 [2024-10-08 18:37:11.175701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.092 [2024-10-08 18:37:11.175726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.092 [2024-10-08 18:37:11.190320] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.092 [2024-10-08 18:37:11.190347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.092 [2024-10-08 18:37:11.200176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.092 [2024-10-08 18:37:11.200202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.092 [2024-10-08 18:37:11.211958] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.092 [2024-10-08 18:37:11.211997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.092 [2024-10-08 18:37:11.222167] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.092 [2024-10-08 18:37:11.222192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.092 [2024-10-08 18:37:11.234261] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.092 [2024-10-08 18:37:11.234309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.092 [2024-10-08 18:37:11.249049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.092 [2024-10-08 18:37:11.249089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.092 [2024-10-08 18:37:11.258888] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.092 [2024-10-08 18:37:11.258912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.092 [2024-10-08 18:37:11.271115] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.092 [2024-10-08 18:37:11.271148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.092 [2024-10-08 18:37:11.283887] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.092 [2024-10-08 18:37:11.283913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.092 [2024-10-08 18:37:11.293570] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.092 [2024-10-08 18:37:11.293610] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.092 [2024-10-08 18:37:11.305971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.092 [2024-10-08 18:37:11.305995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.092 [2024-10-08 18:37:11.321730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.092 [2024-10-08 18:37:11.321754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.092 [2024-10-08 18:37:11.331538] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.092 [2024-10-08 18:37:11.331565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.092 [2024-10-08 18:37:11.347380] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.092 [2024-10-08 18:37:11.347405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.092 [2024-10-08 18:37:11.362251] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.092 [2024-10-08 18:37:11.362301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.092 [2024-10-08 18:37:11.372011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.092 [2024-10-08 18:37:11.372035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.350 [2024-10-08 18:37:11.384535] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.350 [2024-10-08 18:37:11.384560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.350 [2024-10-08 18:37:11.395233] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.350 [2024-10-08 18:37:11.395256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.350 [2024-10-08 18:37:11.410390] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.350 [2024-10-08 18:37:11.410417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.350 [2024-10-08 18:37:11.419981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.350 [2024-10-08 18:37:11.420008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.350 [2024-10-08 18:37:11.432240] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.350 [2024-10-08 18:37:11.432265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.350 [2024-10-08 18:37:11.443194] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.350 [2024-10-08 18:37:11.443220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.350 [2024-10-08 18:37:11.457774] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.350 [2024-10-08 18:37:11.457814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.350 [2024-10-08 18:37:11.467046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.350 [2024-10-08 18:37:11.467070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.350 [2024-10-08 18:37:11.482064] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.350 [2024-10-08 18:37:11.482088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.350 [2024-10-08 18:37:11.492188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.350 [2024-10-08 18:37:11.492212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.350 [2024-10-08 18:37:11.504272] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.350 [2024-10-08 18:37:11.504328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.350 [2024-10-08 18:37:11.515655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.350 [2024-10-08 18:37:11.515680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.350 [2024-10-08 18:37:11.531085] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.350 [2024-10-08 18:37:11.531108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.350 [2024-10-08 18:37:11.548359] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.350 [2024-10-08 18:37:11.548384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.350 [2024-10-08 18:37:11.557773] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.350 [2024-10-08 18:37:11.557798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.350 [2024-10-08 18:37:11.569385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.350 [2024-10-08 18:37:11.569424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.350 [2024-10-08 18:37:11.579837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.350 [2024-10-08 18:37:11.579860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.350 [2024-10-08 18:37:11.593968] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.350 [2024-10-08 18:37:11.593994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.350 [2024-10-08 18:37:11.603630] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.350 [2024-10-08 18:37:11.603655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.350 [2024-10-08 18:37:11.617016] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.350 [2024-10-08 18:37:11.617056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.350 [2024-10-08 18:37:11.626926] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.350 [2024-10-08 18:37:11.626950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.608 [2024-10-08 18:37:11.641499] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.608 [2024-10-08 18:37:11.641525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.608 [2024-10-08 18:37:11.650822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.608 [2024-10-08 18:37:11.650847] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.608 [2024-10-08 18:37:11.664881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.608 [2024-10-08 18:37:11.664905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.608 [2024-10-08 18:37:11.674346] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.608 [2024-10-08 18:37:11.674372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.608 [2024-10-08 18:37:11.686201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.608 [2024-10-08 18:37:11.686226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.608 [2024-10-08 18:37:11.703069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.608 [2024-10-08 18:37:11.703093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.608 [2024-10-08 18:37:11.718232] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.608 [2024-10-08 18:37:11.718258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.608 [2024-10-08 18:37:11.727769] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.608 [2024-10-08 18:37:11.727807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.608 [2024-10-08 18:37:11.741858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.608 [2024-10-08 18:37:11.741891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.608 [2024-10-08 18:37:11.752370] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.608 [2024-10-08 18:37:11.752397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.608 [2024-10-08 18:37:11.764180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.608 [2024-10-08 18:37:11.764206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.608 [2024-10-08 18:37:11.774701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.608 [2024-10-08 18:37:11.774724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.608 [2024-10-08 18:37:11.790972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.608 [2024-10-08 18:37:11.790996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.608 [2024-10-08 18:37:11.806391] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.608 [2024-10-08 18:37:11.806429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.608 [2024-10-08 18:37:11.815823] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.608 [2024-10-08 18:37:11.815850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.608 11497.33 IOPS, 89.82 MiB/s [2024-10-08T16:37:11.895Z] [2024-10-08 18:37:11.827936] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.608 [2024-10-08 18:37:11.827960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.608 [2024-10-08 
18:37:11.838688] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.608 [2024-10-08 18:37:11.838711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.608 [2024-10-08 18:37:11.853263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.608 [2024-10-08 18:37:11.853311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.608 [2024-10-08 18:37:11.862480] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.608 [2024-10-08 18:37:11.862507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.608 [2024-10-08 18:37:11.874377] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.608 [2024-10-08 18:37:11.874402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.608 [2024-10-08 18:37:11.891050] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.608 [2024-10-08 18:37:11.891089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.866 [2024-10-08 18:37:11.905786] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.866 [2024-10-08 18:37:11.905812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.866 [2024-10-08 18:37:11.915642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.866 [2024-10-08 18:37:11.915665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.866 [2024-10-08 18:37:11.927867] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.866 [2024-10-08 18:37:11.927907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.866 [2024-10-08 18:37:11.941298] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.866 [2024-10-08 18:37:11.941325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.866 [2024-10-08 18:37:11.950685] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.866 [2024-10-08 18:37:11.950710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.866 [2024-10-08 18:37:11.963098] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.866 [2024-10-08 18:37:11.963123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.866 [2024-10-08 18:37:11.978790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.866 [2024-10-08 18:37:11.978831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.866 [2024-10-08 18:37:11.988823] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.866 [2024-10-08 18:37:11.988847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.866 [2024-10-08 18:37:12.000812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.866 [2024-10-08 18:37:12.000836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.866 [2024-10-08 18:37:12.011865] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.866 [2024-10-08 18:37:12.011891] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.866 [2024-10-08 18:37:12.024661] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.866 [2024-10-08 18:37:12.024688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.866 [2024-10-08 18:37:12.034337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.866 [2024-10-08 18:37:12.034378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.866 [2024-10-08 18:37:12.045964] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.866 [2024-10-08 18:37:12.045988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.866 [2024-10-08 18:37:12.062443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.866 [2024-10-08 18:37:12.062469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.866 [2024-10-08 18:37:12.072051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.866 [2024-10-08 18:37:12.072077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.866 [2024-10-08 18:37:12.083949] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.866 [2024-10-08 18:37:12.083973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.866 [2024-10-08 18:37:12.096220] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.866 [2024-10-08 18:37:12.096246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.866 [2024-10-08 18:37:12.105992] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.866 [2024-10-08 18:37:12.106018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.866 [2024-10-08 18:37:12.118148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.866 [2024-10-08 18:37:12.118173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.866 [2024-10-08 18:37:12.133845] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.866 [2024-10-08 18:37:12.133868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:53.866 [2024-10-08 18:37:12.143069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:53.866 [2024-10-08 18:37:12.143093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.124 [2024-10-08 18:37:12.158382] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.124 [2024-10-08 18:37:12.158409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.124 [2024-10-08 18:37:12.168125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.124 [2024-10-08 18:37:12.168162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.124 [2024-10-08 18:37:12.180302] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.124 [2024-10-08 18:37:12.180340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.124 [2024-10-08 18:37:12.192874] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.124 [2024-10-08 18:37:12.192900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.124 [2024-10-08 18:37:12.202820] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.124 [2024-10-08 18:37:12.202859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.124 [2024-10-08 18:37:12.216733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.124 [2024-10-08 18:37:12.216773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.124 [2024-10-08 18:37:12.226547] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.124 [2024-10-08 18:37:12.226589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.124 [2024-10-08 18:37:12.238614] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.124 [2024-10-08 18:37:12.238640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.124 [2024-10-08 18:37:12.254914] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.124 [2024-10-08 18:37:12.254939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.124 [2024-10-08 18:37:12.264839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.124 [2024-10-08 18:37:12.264863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.124 [2024-10-08 18:37:12.276802] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.124 [2024-10-08 18:37:12.276843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.124 [2024-10-08 18:37:12.287772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.124 [2024-10-08 18:37:12.287797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.124 [2024-10-08 18:37:12.303639] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.124 [2024-10-08 18:37:12.303677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.124 [2024-10-08 18:37:12.319050] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.124 [2024-10-08 18:37:12.319076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.124 [2024-10-08 18:37:12.334631] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.124 [2024-10-08 18:37:12.334658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.124 [2024-10-08 18:37:12.344490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.124 [2024-10-08 18:37:12.344518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.124 [2024-10-08 18:37:12.356897] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.124 [2024-10-08 18:37:12.356921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.124 [2024-10-08 18:37:12.367831] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.124 [2024-10-08 18:37:12.367854] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.124 [2024-10-08 18:37:12.382727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.124 [2024-10-08 18:37:12.382766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.124 [2024-10-08 18:37:12.400818] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.124 [2024-10-08 18:37:12.400857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.124 [2024-10-08 18:37:12.410662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.124 [2024-10-08 18:37:12.410686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.382 [2024-10-08 18:37:12.422393] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.382 [2024-10-08 18:37:12.422419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.382 [2024-10-08 18:37:12.438162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.382 [2024-10-08 18:37:12.438186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.382 [2024-10-08 18:37:12.447734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.382 [2024-10-08 18:37:12.447773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.382 [2024-10-08 18:37:12.462397] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.382 [2024-10-08 18:37:12.462423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.382 [2024-10-08 18:37:12.472693] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.382 [2024-10-08 18:37:12.472717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.382 [2024-10-08 18:37:12.483667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.382 [2024-10-08 18:37:12.483691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.382 [2024-10-08 18:37:12.498804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.382 [2024-10-08 18:37:12.498844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.382 [2024-10-08 18:37:12.508381] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.382 [2024-10-08 18:37:12.508406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.382 [2024-10-08 18:37:12.520363] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.382 [2024-10-08 18:37:12.520389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.382 [2024-10-08 18:37:12.531763] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.382 [2024-10-08 18:37:12.531801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.382 [2024-10-08 18:37:12.546579] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.382 [2024-10-08 18:37:12.546605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.382 [2024-10-08 18:37:12.556084] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.382 [2024-10-08 18:37:12.556108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.382 [2024-10-08 18:37:12.568020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.382 [2024-10-08 18:37:12.568043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.382 [2024-10-08 18:37:12.583133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.382 [2024-10-08 18:37:12.583173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.382 [2024-10-08 18:37:12.600686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.382 [2024-10-08 18:37:12.600711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.382 [2024-10-08 18:37:12.610301] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.382 [2024-10-08 18:37:12.610326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.382 [2024-10-08 18:37:12.622412] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.382 [2024-10-08 18:37:12.622437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.382 [2024-10-08 18:37:12.637915] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.382 [2024-10-08 18:37:12.637940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.382 [2024-10-08 18:37:12.647169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.382 [2024-10-08 18:37:12.647195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.382 [2024-10-08 18:37:12.661450] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.382 [2024-10-08 18:37:12.661475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.641 [2024-10-08 18:37:12.670819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.641 [2024-10-08 18:37:12.670856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.641 [2024-10-08 18:37:12.682582] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.641 [2024-10-08 18:37:12.682607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.641 [2024-10-08 18:37:12.698775] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.641 [2024-10-08 18:37:12.698813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.641 [2024-10-08 18:37:12.708389] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.641 [2024-10-08 18:37:12.708415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.641 [2024-10-08 18:37:12.720463] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.641 [2024-10-08 18:37:12.720488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.641 [2024-10-08 18:37:12.731640] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.641 [2024-10-08 18:37:12.731664] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.641 [2024-10-08 18:37:12.745737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.641 [2024-10-08 18:37:12.745778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.641 [2024-10-08 18:37:12.755607] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.641 [2024-10-08 18:37:12.755648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.641 [2024-10-08 18:37:12.770481] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.641 [2024-10-08 18:37:12.770508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.641 [2024-10-08 18:37:12.784894] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.641 [2024-10-08 18:37:12.784920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.641 [2024-10-08 18:37:12.794163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.641 [2024-10-08 18:37:12.794187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.641 [2024-10-08 18:37:12.809882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.641 [2024-10-08 18:37:12.809906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.641 [2024-10-08 18:37:12.819043] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.641 [2024-10-08 18:37:12.819068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.641 11520.25 IOPS, 90.00 MiB/s [2024-10-08T16:37:12.928Z] [2024-10-08 18:37:12.833308] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.641 [2024-10-08 18:37:12.833332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.641 [2024-10-08 18:37:12.842397] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.641 [2024-10-08 18:37:12.842422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.641 [2024-10-08 18:37:12.854717] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.641 [2024-10-08 18:37:12.854740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.641 [2024-10-08 18:37:12.870356] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.641 [2024-10-08 18:37:12.870383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.641 [2024-10-08 18:37:12.879741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.641 [2024-10-08 18:37:12.879764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.641 [2024-10-08 18:37:12.893806] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.641 [2024-10-08 18:37:12.893829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.641 [2024-10-08 18:37:12.903296] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.641 [2024-10-08 18:37:12.903329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.641 [2024-10-08 
18:37:12.918054] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.641 [2024-10-08 18:37:12.918079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.929 [2024-10-08 18:37:12.934598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.929 [2024-10-08 18:37:12.934640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.929 [2024-10-08 18:37:12.951238] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.929 [2024-10-08 18:37:12.951265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.929 [2024-10-08 18:37:12.967193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.929 [2024-10-08 18:37:12.967218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.929 [2024-10-08 18:37:12.982985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.929 [2024-10-08 18:37:12.983026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.929 [2024-10-08 18:37:12.993095] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.929 [2024-10-08 18:37:12.993120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.929 [2024-10-08 18:37:13.005285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.929 [2024-10-08 18:37:13.005331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.929 [2024-10-08 18:37:13.016059] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.929 [2024-10-08 18:37:13.016084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.929 [2024-10-08 18:37:13.026875] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.929 [2024-10-08 18:37:13.026898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.929 [2024-10-08 18:37:13.042338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.929 [2024-10-08 18:37:13.042364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.929 [2024-10-08 18:37:13.051700] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.929 [2024-10-08 18:37:13.051724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.929 [2024-10-08 18:37:13.066220] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.929 [2024-10-08 18:37:13.066243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.929 [2024-10-08 18:37:13.076663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.929 [2024-10-08 18:37:13.076687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.929 [2024-10-08 18:37:13.088421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.929 [2024-10-08 18:37:13.088447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.929 [2024-10-08 18:37:13.099511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.929 [2024-10-08 18:37:13.099537] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.929 [2024-10-08 18:37:13.112390] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.929 [2024-10-08 18:37:13.112417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.929 [2024-10-08 18:37:13.121898] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.929 [2024-10-08 18:37:13.121922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.929 [2024-10-08 18:37:13.138274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.929 [2024-10-08 18:37:13.138321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.929 [2024-10-08 18:37:13.147778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.929 [2024-10-08 18:37:13.147811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.929 [2024-10-08 18:37:13.159713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.929 [2024-10-08 18:37:13.159737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.929 [2024-10-08 18:37:13.172210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.929 [2024-10-08 18:37:13.172236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.929 [2024-10-08 18:37:13.182502] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.929 [2024-10-08 18:37:13.182528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:54.929 [2024-10-08 18:37:13.194568] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:54.929 [2024-10-08 18:37:13.194610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.214 [2024-10-08 18:37:13.210816] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.214 [2024-10-08 18:37:13.210843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.214 [2024-10-08 18:37:13.227352] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.214 [2024-10-08 18:37:13.227388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.214 [2024-10-08 18:37:13.242399] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.214 [2024-10-08 18:37:13.242426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.214 [2024-10-08 18:37:13.251509] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.214 [2024-10-08 18:37:13.251536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.214 [2024-10-08 18:37:13.263828] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.214 [2024-10-08 18:37:13.263853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.214 [2024-10-08 18:37:13.275154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.214 [2024-10-08 18:37:13.275177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.214 [2024-10-08 18:37:13.289643] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.214 [2024-10-08 18:37:13.289687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.214 [2024-10-08 18:37:13.299092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.214 [2024-10-08 18:37:13.299115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.214 [2024-10-08 18:37:13.313296] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.214 [2024-10-08 18:37:13.313335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.214 [2024-10-08 18:37:13.323226] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.214 [2024-10-08 18:37:13.323250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.214 [2024-10-08 18:37:13.337444] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.214 [2024-10-08 18:37:13.337471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.214 [2024-10-08 18:37:13.347433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.214 [2024-10-08 18:37:13.347460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.214 [2024-10-08 18:37:13.362044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.214 [2024-10-08 18:37:13.362070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.214 [2024-10-08 18:37:13.371983] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.214 [2024-10-08 18:37:13.372007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.214 [2024-10-08 18:37:13.384020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.214 [2024-10-08 18:37:13.384046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.214 [2024-10-08 18:37:13.394885] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.214 [2024-10-08 18:37:13.394909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.214 [2024-10-08 18:37:13.410040] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.214 [2024-10-08 18:37:13.410066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.214 [2024-10-08 18:37:13.419938] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.214 [2024-10-08 18:37:13.419962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.214 [2024-10-08 18:37:13.431649] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.214 [2024-10-08 18:37:13.431674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.214 [2024-10-08 18:37:13.441499] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.214 [2024-10-08 18:37:13.441526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.214 [2024-10-08 18:37:13.453505] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.214 [2024-10-08 18:37:13.453530] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.214 [2024-10-08 18:37:13.464247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.214 [2024-10-08 18:37:13.464299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.214 [2024-10-08 18:37:13.474613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.214 [2024-10-08 18:37:13.474639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.214 [2024-10-08 18:37:13.488939] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.214 [2024-10-08 18:37:13.488979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.214 [2024-10-08 18:37:13.498662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.214 [2024-10-08 18:37:13.498686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.472 [2024-10-08 18:37:13.510824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.472 [2024-10-08 18:37:13.510848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.472 [2024-10-08 18:37:13.527790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.472 [2024-10-08 18:37:13.527816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.472 [2024-10-08 18:37:13.542525] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.472 [2024-10-08 18:37:13.542552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.472 [2024-10-08 18:37:13.551508] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.472 [2024-10-08 18:37:13.551535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.472 [2024-10-08 18:37:13.567933] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.472 [2024-10-08 18:37:13.567958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.472 [2024-10-08 18:37:13.577981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.472 [2024-10-08 18:37:13.578006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.472 [2024-10-08 18:37:13.594047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.472 [2024-10-08 18:37:13.594071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.472 [2024-10-08 18:37:13.603949] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.472 [2024-10-08 18:37:13.603974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.472 [2024-10-08 18:37:13.616235] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.472 [2024-10-08 18:37:13.616274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.473 [2024-10-08 18:37:13.627420] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.473 [2024-10-08 18:37:13.627445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.473 [2024-10-08 18:37:13.640540] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.473 [2024-10-08 18:37:13.640567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.473 [2024-10-08 18:37:13.650176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.473 [2024-10-08 18:37:13.650200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.473 [2024-10-08 18:37:13.662433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.473 [2024-10-08 18:37:13.662461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.473 [2024-10-08 18:37:13.679017] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.473 [2024-10-08 18:37:13.679042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.473 [2024-10-08 18:37:13.694619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.473 [2024-10-08 18:37:13.694644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.473 [2024-10-08 18:37:13.704200] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.473 [2024-10-08 18:37:13.704225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.473 [2024-10-08 18:37:13.716444] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.473 [2024-10-08 18:37:13.716471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.473 [2024-10-08 18:37:13.727491] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.473 [2024-10-08 18:37:13.727517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.473 [2024-10-08 18:37:13.742099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.473 [2024-10-08 18:37:13.742139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.473 [2024-10-08 18:37:13.751827] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.473 [2024-10-08 18:37:13.751851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.731 [2024-10-08 18:37:13.763970] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.731 [2024-10-08 18:37:13.763993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.731 [2024-10-08 18:37:13.774690] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.731 [2024-10-08 18:37:13.774714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.731 [2024-10-08 18:37:13.787740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.731 [2024-10-08 18:37:13.787780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.731 [2024-10-08 18:37:13.801878] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.731 [2024-10-08 18:37:13.801903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.731 [2024-10-08 18:37:13.811229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.731 [2024-10-08 18:37:13.811253] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:55.731 [2024-10-08 18:37:13.825036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:55.731 [2024-10-08 18:37:13.825060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:55.731 11519.60 IOPS, 90.00 MiB/s [2024-10-08T16:37:14.018Z] [2024-10-08 18:37:13.834503] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:55.731 [2024-10-08 18:37:13.834538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:55.731 [2024-10-08 18:37:13.842198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:55.731 [2024-10-08 18:37:13.842239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:55.731
00:29:55.731 Latency(us)
00:29:55.731 [2024-10-08T16:37:14.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:55.731 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:29:55.731 Nvme1n1 : 5.01 11521.78 90.01 0.00 0.00 11094.26 3058.35 18252.99
00:29:55.731 [2024-10-08T16:37:14.018Z] ===================================================================================================================
00:29:55.731 [2024-10-08T16:37:14.018Z] Total : 11521.78 90.01 0.00 0.00 11094.26 3058.35 18252.99
00:29:55.731 [2024-10-08 18:37:13.848606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:55.731 [2024-10-08 18:37:13.848642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:55.731 [2024-10-08 18:37:13.856603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:55.731 [2024-10-08 18:37:13.856640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:55.731 [2024-10-08 18:37:13.864588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:55.731 [2024-10-08 18:37:13.864610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:55.731 [2024-10-08 18:37:13.872638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:55.731 [2024-10-08 18:37:13.872681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:55.731 [2024-10-08 18:37:13.880641] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:55.731 [2024-10-08 18:37:13.880685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:55.731 [2024-10-08 18:37:13.888634] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:55.731 [2024-10-08 18:37:13.888674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:55.731 [2024-10-08 18:37:13.896633] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:55.731 [2024-10-08 18:37:13.896674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:55.731 [2024-10-08 18:37:13.904631] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:55.731 [2024-10-08 18:37:13.904673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:55.731 [2024-10-08 18:37:13.912638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:55.731 [2024-10-08 
18:37:13.912681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.731 [2024-10-08 18:37:13.920638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.731 [2024-10-08 18:37:13.920681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.731 [2024-10-08 18:37:13.928637] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.731 [2024-10-08 18:37:13.928678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.731 [2024-10-08 18:37:13.936633] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.731 [2024-10-08 18:37:13.936674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.731 [2024-10-08 18:37:13.944641] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.731 [2024-10-08 18:37:13.944683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.731 [2024-10-08 18:37:13.952638] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.731 [2024-10-08 18:37:13.952680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.731 [2024-10-08 18:37:13.960639] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.731 [2024-10-08 18:37:13.960699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.731 [2024-10-08 18:37:13.968632] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.731 [2024-10-08 18:37:13.968674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.731 [2024-10-08 18:37:13.976630] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.731 [2024-10-08 18:37:13.976670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.731 [2024-10-08 18:37:13.984633] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.731 [2024-10-08 18:37:13.984674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.731 [2024-10-08 18:37:13.992627] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.731 [2024-10-08 18:37:13.992674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.731 [2024-10-08 18:37:14.000603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.731 [2024-10-08 18:37:14.000625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.731 [2024-10-08 18:37:14.008602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.731 [2024-10-08 18:37:14.008622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.731 [2024-10-08 18:37:14.016605] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.731 [2024-10-08 18:37:14.016626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.989 [2024-10-08 18:37:14.024587] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.989 [2024-10-08 18:37:14.024608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.989 [2024-10-08 18:37:14.032636] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.989 [2024-10-08 18:37:14.032676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.989 [2024-10-08 18:37:14.040633] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.989 [2024-10-08 18:37:14.040674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.989 [2024-10-08 18:37:14.048648] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.989 [2024-10-08 18:37:14.048683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.989 [2024-10-08 18:37:14.056603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.989 [2024-10-08 18:37:14.056623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.989 [2024-10-08 18:37:14.064599] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.989 [2024-10-08 18:37:14.064619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.989 [2024-10-08 18:37:14.072600] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.989 [2024-10-08 18:37:14.072621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.989 [2024-10-08 18:37:14.080586] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.989 [2024-10-08 18:37:14.080607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.989 [2024-10-08 18:37:14.088642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.989 [2024-10-08 18:37:14.088681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.989 [2024-10-08 18:37:14.096636] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.989 [2024-10-08 18:37:14.096675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.989 [2024-10-08 18:37:14.104604] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.989 [2024-10-08 18:37:14.104625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.989 [2024-10-08 18:37:14.116601] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:55.989 [2024-10-08 18:37:14.116646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:55.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1399436) - No such process 00:29:55.989 18:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1399436 00:29:55.989 18:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:55.989 18:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:55.989 18:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:55.989 18:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:55.989 18:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d 
delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:29:55.989 18:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:55.989 18:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:29:55.989 delay0
00:29:55.989 18:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:55.989 18:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:29:55.989 18:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:55.989 18:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:29:55.989 18:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:55.989 18:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:29:55.989 [2024-10-08 18:37:14.267390] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:30:04.096 Initializing NVMe Controllers
00:30:04.096 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:04.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:04.096 Initialization complete. Launching workers.
00:30:04.096 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 225, failed: 23759
00:30:04.096 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 23828, failed to submit 156
00:30:04.096 success 23770, unsuccessful 58, failed 0
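
The tail of the zcopy test above swaps the namespace behind nqn.2016-06.io.spdk:cnode1 for a deliberately slow bdev and then stress-tests command aborts against it: NSID 1 is removed, malloc0 is wrapped in a delay bdev named delay0 (the delay latencies are given in microseconds, so 1000000 is a one-second average and tail latency for both reads and writes), delay0 is re-added as NSID 1, and the abort example is pointed at the TCP listener on 10.0.0.2:4420. Outside the autotest harness the same steps can presumably be reproduced against an already-running SPDK target with scripts/rpc.py; the sketch below is only an illustration under that assumption, with the NQN, bdev names, address and flags taken from the log above.

  # Run from the SPDK source tree; assumes a target that already exposes
  # nqn.2016-06.io.spdk:cnode1 (backed by a malloc0 bdev) on 10.0.0.2:4420/TCP.
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # Wrap malloc0 in a delay bdev: 1000000 us (1 s) average/tail latency for reads and writes.
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Drive 50/50 random read/write I/O at queue depth 64 for 5 seconds on core 0 and abort it.
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

With every I/O delayed by a second, the 23759 commands reported as failed above are consistent with the abort example cancelling its own outstanding requests (23770 aborts succeeded) rather than with real I/O errors, which is the behaviour this test exercises.
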
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 1398107 00:30:04.097 18:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1398107 ']' 00:30:04.097 18:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1398107 00:30:04.097 18:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:30:04.097 18:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:04.097 18:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1398107 00:30:04.097 18:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:04.097 18:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:04.097 18:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1398107' 00:30:04.097 killing process with pid 1398107 00:30:04.097 18:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1398107 00:30:04.097 18:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1398107 00:30:04.097 18:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:04.097 18:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:04.097 18:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:04.097 18:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:30:04.097 18:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:30:04.097 18:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:04.097 18:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:30:04.097 18:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:04.097 18:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:04.097 18:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.097 18:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:04.097 18:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.998 18:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:05.998 00:30:05.998 real 0m28.878s 00:30:05.998 user 0m41.190s 00:30:05.998 sys 0m9.940s 00:30:05.998 18:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:05.998 18:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:05.998 ************************************ 00:30:05.998 END TEST nvmf_zcopy 00:30:05.998 
************************************ 00:30:05.998 18:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:30:05.998 18:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:05.998 18:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:05.998 18:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:05.998 ************************************ 00:30:05.998 START TEST nvmf_nmic 00:30:05.998 ************************************ 00:30:05.998 18:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:30:05.998 * Looking for test storage... 00:30:05.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:05.998 18:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:05.998 18:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:30:05.998 18:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:05.998 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:05.998 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:05.998 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:05.998 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:05.998 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:30:05.998 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:30:05.998 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:30:05.998 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:30:05.998 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:30:05.998 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:30:05.998 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:30:05.998 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:05.998 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:30:05.998 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:30:05.998 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:05.998 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:05.998 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:30:05.998 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:30:05.998 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:05.998 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:30:05.998 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:30:05.998 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:30:05.998 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:30:05.998 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:05.998 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:30:05.998 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:30:05.998 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:05.998 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:05.998 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:30:05.998 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:05.998 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:05.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.998 --rc genhtml_branch_coverage=1 00:30:05.998 --rc genhtml_function_coverage=1 00:30:05.998 --rc genhtml_legend=1 00:30:05.998 --rc geninfo_all_blocks=1 00:30:05.998 --rc geninfo_unexecuted_blocks=1 00:30:05.998 00:30:05.998 ' 00:30:05.998 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:05.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.998 --rc genhtml_branch_coverage=1 00:30:05.998 --rc genhtml_function_coverage=1 00:30:05.998 --rc genhtml_legend=1 00:30:05.998 --rc geninfo_all_blocks=1 00:30:05.998 --rc geninfo_unexecuted_blocks=1 00:30:05.998 00:30:05.998 ' 00:30:05.998 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:05.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.998 --rc genhtml_branch_coverage=1 00:30:05.998 --rc genhtml_function_coverage=1 00:30:05.998 --rc genhtml_legend=1 00:30:05.998 --rc geninfo_all_blocks=1 00:30:05.998 --rc geninfo_unexecuted_blocks=1 00:30:05.998 00:30:05.998 ' 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:05.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.999 --rc genhtml_branch_coverage=1 00:30:05.999 --rc genhtml_function_coverage=1 00:30:05.999 --rc genhtml_legend=1 00:30:05.999 --rc geninfo_all_blocks=1 00:30:05.999 --rc geninfo_unexecuted_blocks=1 00:30:05.999 00:30:05.999 ' 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:05.999 18:37:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:30:05.999 18:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:08.529 18:37:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:08.529 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:08.529 18:37:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:08.529 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:08.529 Found net devices under 0000:09:00.0: cvl_0_0 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:08.529 
18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.529 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:08.529 Found net devices under 0000:09:00.1: cvl_0_1 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
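Condensed for readability, the nvmf_tcp_init plumbing traced above and just below boils down to roughly the following shell steps. This is a minimal sketch of what the trace already shows, assuming the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing used on this particular rig (other hosts will differ):

# Clear any stale addressing on the two E810 ports used for the test.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
# Isolate the target-side port in its own network namespace so initiator
# (cvl_0_1) and target (cvl_0_0) can exchange NVMe/TCP traffic on one host.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# Address both ends of the link: initiator stays in the root namespace,
# the target address lives inside the namespace.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
# Bring the interfaces (and the namespace loopback) up and admit NVMe/TCP
# traffic to the default discovery/IO port 4420.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The ping checks that follow in the trace simply confirm that 10.0.0.2 (target, inside the namespace) and 10.0.0.1 (initiator) are reachable before nvmf_tgt is started with ip netns exec.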
00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:08.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:08.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:30:08.530 00:30:08.530 --- 10.0.0.2 ping statistics --- 00:30:08.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.530 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:08.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:08.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:30:08.530 00:30:08.530 --- 10.0.0.1 ping statistics --- 00:30:08.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.530 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=1402934 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # waitforlisten 1402934 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1402934 ']' 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:08.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:08.530 [2024-10-08 18:37:26.455775] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:08.530 [2024-10-08 18:37:26.456937] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:30:08.530 [2024-10-08 18:37:26.457020] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:08.530 [2024-10-08 18:37:26.525366] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:08.530 [2024-10-08 18:37:26.638022] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:08.530 [2024-10-08 18:37:26.638095] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:08.530 [2024-10-08 18:37:26.638123] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:08.530 [2024-10-08 18:37:26.638135] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:08.530 [2024-10-08 18:37:26.638146] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:08.530 [2024-10-08 18:37:26.639904] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:08.530 [2024-10-08 18:37:26.639968] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:30:08.530 [2024-10-08 18:37:26.640039] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:30:08.530 [2024-10-08 18:37:26.640042] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.530 [2024-10-08 18:37:26.751921] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:08.530 [2024-10-08 18:37:26.752154] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:08.530 [2024-10-08 18:37:26.752413] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:08.530 [2024-10-08 18:37:26.752984] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:08.530 [2024-10-08 18:37:26.753207] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:08.530 [2024-10-08 18:37:26.800725] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.530 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:08.789 Malloc0 00:30:08.789 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.789 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:08.789 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.789 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:08.789 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.789 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:08.789 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.789 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:08.789 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.789 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:08.789 
18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.789 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:08.789 [2024-10-08 18:37:26.856981] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:08.789 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.789 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:30:08.789 test case1: single bdev can't be used in multiple subsystems 00:30:08.789 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:30:08.789 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.789 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:08.789 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.789 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:08.789 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.789 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:08.789 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.789 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:30:08.789 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:30:08.789 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.789 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:08.789 [2024-10-08 18:37:26.880693] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:30:08.789 [2024-10-08 18:37:26.880722] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:30:08.789 [2024-10-08 18:37:26.880753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:08.789 request: 00:30:08.789 { 00:30:08.789 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:30:08.789 "namespace": { 00:30:08.789 "bdev_name": "Malloc0", 00:30:08.789 "no_auto_visible": false 00:30:08.789 }, 00:30:08.789 "method": "nvmf_subsystem_add_ns", 00:30:08.789 "req_id": 1 00:30:08.789 } 00:30:08.789 Got JSON-RPC error response 00:30:08.789 response: 00:30:08.789 { 00:30:08.789 "code": -32602, 00:30:08.789 "message": "Invalid parameters" 00:30:08.789 } 00:30:08.789 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:08.789 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:30:08.789 18:37:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:30:08.789 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:30:08.789 Adding namespace failed - expected result. 00:30:08.789 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:30:08.789 test case2: host connect to nvmf target in multiple paths 00:30:08.789 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:08.789 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.789 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:08.789 [2024-10-08 18:37:26.888787] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:08.789 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.790 18:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:10.161 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:30:10.161 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:30:10.161 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:30:10.161 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:30:10.161 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:30:10.161 18:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:30:12.058 18:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:30:12.058 18:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:30:12.058 18:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:30:12.058 18:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:30:12.058 18:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:30:12.058 18:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:30:12.058 18:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:30:12.058 [global] 00:30:12.058 thread=1 00:30:12.058 invalidate=1 
00:30:12.058 rw=write 00:30:12.058 time_based=1 00:30:12.058 runtime=1 00:30:12.058 ioengine=libaio 00:30:12.058 direct=1 00:30:12.058 bs=4096 00:30:12.058 iodepth=1 00:30:12.058 norandommap=0 00:30:12.058 numjobs=1 00:30:12.058 00:30:12.058 verify_dump=1 00:30:12.058 verify_backlog=512 00:30:12.058 verify_state_save=0 00:30:12.058 do_verify=1 00:30:12.058 verify=crc32c-intel 00:30:12.058 [job0] 00:30:12.058 filename=/dev/nvme0n1 00:30:12.058 Could not set queue depth (nvme0n1) 00:30:12.315 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:12.315 fio-3.35 00:30:12.315 Starting 1 thread 00:30:13.688 00:30:13.688 job0: (groupid=0, jobs=1): err= 0: pid=1403450: Tue Oct 8 18:37:31 2024 00:30:13.688 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:30:13.689 slat (nsec): min=5431, max=53427, avg=8813.78, stdev=3882.97 00:30:13.689 clat (usec): min=193, max=1491, avg=261.98, stdev=77.88 00:30:13.689 lat (usec): min=199, max=1505, avg=270.79, stdev=79.52 00:30:13.689 clat percentiles (usec): 00:30:13.689 | 1.00th=[ 198], 5.00th=[ 206], 10.00th=[ 215], 20.00th=[ 225], 00:30:13.689 | 30.00th=[ 233], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 00:30:13.689 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 297], 95.00th=[ 424], 00:30:13.689 | 99.00th=[ 457], 99.50th=[ 478], 99.90th=[ 1172], 99.95th=[ 1221], 00:30:13.689 | 99.99th=[ 1500] 00:30:13.689 write: IOPS=2350, BW=9403KiB/s (9628kB/s)(9412KiB/1001msec); 0 zone resets 00:30:13.689 slat (usec): min=7, max=31411, avg=23.53, stdev=647.47 00:30:13.689 clat (usec): min=132, max=392, avg=160.86, stdev=18.95 00:30:13.689 lat (usec): min=140, max=31611, avg=184.39, stdev=648.57 00:30:13.689 clat percentiles (usec): 00:30:13.689 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:30:13.689 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 159], 00:30:13.689 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 192], 00:30:13.689 | 99.00th=[ 255], 99.50th=[ 262], 99.90th=[ 326], 99.95th=[ 383], 00:30:13.689 | 99.99th=[ 392] 00:30:13.689 bw ( KiB/s): min= 8192, max= 8192, per=87.12%, avg=8192.00, stdev= 0.00, samples=1 00:30:13.689 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:30:13.689 lat (usec) : 250=80.21%, 500=19.59%, 750=0.05% 00:30:13.689 lat (msec) : 2=0.16% 00:30:13.689 cpu : usr=3.10%, sys=5.50%, ctx=4404, majf=0, minf=1 00:30:13.689 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:13.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:13.689 issued rwts: total=2048,2353,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:13.689 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:13.689 00:30:13.689 Run status group 0 (all jobs): 00:30:13.689 READ: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:30:13.689 WRITE: bw=9403KiB/s (9628kB/s), 9403KiB/s-9403KiB/s (9628kB/s-9628kB/s), io=9412KiB (9638kB), run=1001-1001msec 00:30:13.689 00:30:13.689 Disk stats (read/write): 00:30:13.689 nvme0n1: ios=1889/2048, merge=0/0, ticks=664/311, in_queue=975, util=98.90% 00:30:13.689 18:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:13.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:30:13.689 18:37:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:13.689 18:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:30:13.689 18:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:30:13.689 18:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:13.689 18:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:30:13.689 18:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:13.689 18:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:30:13.689 18:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:13.689 18:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:30:13.689 18:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:13.689 18:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:30:13.689 18:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:13.689 18:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:30:13.689 18:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:13.689 18:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:13.689 rmmod nvme_tcp 00:30:13.689 rmmod nvme_fabrics 00:30:13.689 rmmod nvme_keyring 00:30:13.689 18:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:13.689 18:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:30:13.689 18:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:30:13.689 18:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 1402934 ']' 00:30:13.689 18:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 1402934 00:30:13.689 18:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1402934 ']' 00:30:13.689 18:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1402934 00:30:13.689 18:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:30:13.689 18:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:13.689 18:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1402934 00:30:13.689 18:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:13.689 18:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:13.689 18:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 1402934' 00:30:13.689 killing process with pid 1402934 00:30:13.689 18:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1402934 00:30:13.689 18:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1402934 00:30:13.947 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:13.947 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:13.947 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:13.947 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:30:13.947 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:30:13.947 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:13.947 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:30:13.947 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:13.947 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:13.947 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.947 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:13.947 18:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:16.476 00:30:16.476 real 0m10.378s 00:30:16.476 user 0m20.576s 00:30:16.476 sys 0m3.969s 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:16.476 ************************************ 00:30:16.476 END TEST nvmf_nmic 00:30:16.476 ************************************ 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:16.476 ************************************ 00:30:16.476 START TEST nvmf_fio_target 00:30:16.476 ************************************ 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:30:16.476 * Looking for test storage... 
00:30:16.476 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:16.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.476 --rc genhtml_branch_coverage=1 00:30:16.476 --rc genhtml_function_coverage=1 00:30:16.476 --rc genhtml_legend=1 00:30:16.476 --rc geninfo_all_blocks=1 00:30:16.476 --rc geninfo_unexecuted_blocks=1 00:30:16.476 00:30:16.476 ' 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:16.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.476 --rc genhtml_branch_coverage=1 00:30:16.476 --rc genhtml_function_coverage=1 00:30:16.476 --rc genhtml_legend=1 00:30:16.476 --rc geninfo_all_blocks=1 00:30:16.476 --rc geninfo_unexecuted_blocks=1 00:30:16.476 00:30:16.476 ' 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:16.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.476 --rc genhtml_branch_coverage=1 00:30:16.476 --rc genhtml_function_coverage=1 00:30:16.476 --rc genhtml_legend=1 00:30:16.476 --rc geninfo_all_blocks=1 00:30:16.476 --rc geninfo_unexecuted_blocks=1 00:30:16.476 00:30:16.476 ' 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:16.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.476 --rc genhtml_branch_coverage=1 00:30:16.476 --rc genhtml_function_coverage=1 00:30:16.476 --rc genhtml_legend=1 00:30:16.476 --rc geninfo_all_blocks=1 00:30:16.476 --rc geninfo_unexecuted_blocks=1 00:30:16.476 
00:30:16.476 ' 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:16.476 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.477 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.477 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.477 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:30:16.477 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.477 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:30:16.477 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:16.477 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:16.477 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:16.477 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:16.477 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:16.477 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:16.477 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:16.477 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:16.477 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:16.477 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:16.477 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:16.477 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:16.477 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:16.477 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:30:16.477 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:16.477 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:16.477 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:16.477 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:16.477 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:16.477 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:16.477 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:16.477 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:16.477 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:16.477 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:16.477 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:30:16.477 18:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:18.376 18:37:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:18.376 18:37:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:18.376 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:18.376 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.376 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:18.377 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:18.377 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.377 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:18.377 Found net 
devices under 0000:09:00.0: cvl_0_0 00:30:18.377 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.377 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:18.377 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.377 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:18.377 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.377 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:18.377 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:18.377 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.377 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:18.377 Found net devices under 0000:09:00.1: cvl_0_1 00:30:18.377 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.377 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:18.377 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:30:18.377 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:18.377 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:18.377 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:18.377 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:18.377 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:18.377 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:18.377 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:18.377 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:18.377 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:18.377 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:18.377 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:18.377 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:18.377 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:18.377 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:30:18.377 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:18.377 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:18.377 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:18.377 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:18.635 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:18.635 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:18.635 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:18.635 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:18.635 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:18.635 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:18.635 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:18.635 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:18.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:18.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:30:18.635 00:30:18.635 --- 10.0.0.2 ping statistics --- 00:30:18.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.635 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:30:18.635 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:18.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:18.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:30:18.635 00:30:18.635 --- 10.0.0.1 ping statistics --- 00:30:18.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.635 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:30:18.635 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:18.635 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:30:18.635 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:18.635 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:18.635 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:18.635 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:18.635 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:18.635 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:18.635 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:18.635 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:30:18.635 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:18.635 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:18.635 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:18.635 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=1405647 00:30:18.635 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:30:18.635 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 1405647 00:30:18.635 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1405647 ']' 00:30:18.635 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:18.635 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:18.635 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:18.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
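At this point the harness has finished wiring the test network: the target-side E810 port (cvl_0_0) sits in its own network namespace with 10.0.0.2/24, the initiator-side port (cvl_0_1) keeps 10.0.0.1/24 in the root namespace, an iptables rule accepts TCP traffic to port 4420, and reachability is confirmed with one ping in each direction before nvmf_tgt is launched inside that namespace with --interrupt-mode. A minimal sketch of the equivalent manual setup, using only the interface names, addresses, and flags visible in the trace (run as root, no error handling):

  # move the target port into its own namespace; the initiator port stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # address both ends of the point-to-point test link and bring them up
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # open the NVMe/TCP port on the initiator side and verify both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # start the SPDK target inside the namespace in interrupt mode, as the trace does next
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF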
00:30:18.635 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:18.635 18:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:18.635 [2024-10-08 18:37:36.834408] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:18.635 [2024-10-08 18:37:36.835634] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:30:18.635 [2024-10-08 18:37:36.835697] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:18.635 [2024-10-08 18:37:36.900742] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:18.894 [2024-10-08 18:37:37.018056] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:18.894 [2024-10-08 18:37:37.018114] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:18.894 [2024-10-08 18:37:37.018128] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:18.894 [2024-10-08 18:37:37.018140] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:18.894 [2024-10-08 18:37:37.018149] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:18.894 [2024-10-08 18:37:37.019993] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:18.894 [2024-10-08 18:37:37.020059] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:30:18.894 [2024-10-08 18:37:37.020125] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:30:18.894 [2024-10-08 18:37:37.020128] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:18.894 [2024-10-08 18:37:37.130852] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:18.894 [2024-10-08 18:37:37.131071] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:18.894 [2024-10-08 18:37:37.131380] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:18.894 [2024-10-08 18:37:37.131968] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:18.894 [2024-10-08 18:37:37.132191] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
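With the target up (reactors started on cores 0 through 3 and every poll group switched to interrupt mode), fio.sh provisions the storage stack over the RPC socket: two standalone malloc bdevs, a two-member raid0, a three-member concat volume, one subsystem that exposes all of them as namespaces, and a TCP listener on 10.0.0.2:4420, after which the initiator attaches with nvme-cli. A condensed sketch of that RPC sequence, assuming rpc.py talks to the default /var/tmp/spdk.sock and reusing the names, serial, and host variables visible in the trace:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512    # invoked seven times in total -> Malloc0 .. Malloc6
  rpc.py bdev_raid_create -n raid0   -z 64 -r 0       -b 'Malloc2 Malloc3'
  rpc.py bdev_raid_create -n concat0 -r concat -z 64  -b 'Malloc4 Malloc5 Malloc6'
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

The four namespaces then surface on the initiator as /dev/nvme0n1 through /dev/nvme0n4, which are exactly the filenames the fio jobs below target.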
00:30:18.894 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:18.894 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:30:18.894 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:18.894 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:18.894 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:18.894 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:18.894 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:19.459 [2024-10-08 18:37:37.480909] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:19.459 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:19.717 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:30:19.717 18:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:19.974 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:30:19.974 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:20.231 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:30:20.231 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:20.795 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:30:20.795 18:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:30:20.795 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:21.360 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:30:21.360 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:21.360 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:30:21.360 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:21.927 18:37:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:30:21.927 18:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:30:22.185 18:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:22.442 18:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:22.442 18:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:22.698 18:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:22.699 18:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:22.955 18:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:23.212 [2024-10-08 18:37:41.365063] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:23.212 18:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:30:23.469 18:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:30:23.727 18:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:23.985 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:30:23.985 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:30:23.985 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:30:23.985 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:30:23.985 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:30:23.985 18:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:30:25.883 18:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:30:25.883 18:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:30:25.883 18:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:30:26.140 18:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:30:26.140 18:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:30:26.140 18:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:30:26.140 18:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:30:26.140 [global] 00:30:26.140 thread=1 00:30:26.140 invalidate=1 00:30:26.140 rw=write 00:30:26.140 time_based=1 00:30:26.140 runtime=1 00:30:26.140 ioengine=libaio 00:30:26.140 direct=1 00:30:26.140 bs=4096 00:30:26.140 iodepth=1 00:30:26.140 norandommap=0 00:30:26.140 numjobs=1 00:30:26.140 00:30:26.140 verify_dump=1 00:30:26.140 verify_backlog=512 00:30:26.140 verify_state_save=0 00:30:26.140 do_verify=1 00:30:26.140 verify=crc32c-intel 00:30:26.140 [job0] 00:30:26.140 filename=/dev/nvme0n1 00:30:26.140 [job1] 00:30:26.140 filename=/dev/nvme0n2 00:30:26.140 [job2] 00:30:26.140 filename=/dev/nvme0n3 00:30:26.140 [job3] 00:30:26.140 filename=/dev/nvme0n4 00:30:26.140 Could not set queue depth (nvme0n1) 00:30:26.140 Could not set queue depth (nvme0n2) 00:30:26.140 Could not set queue depth (nvme0n3) 00:30:26.140 Could not set queue depth (nvme0n4) 00:30:26.140 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:26.140 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:26.140 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:26.140 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:26.140 fio-3.35 00:30:26.140 Starting 4 threads 00:30:27.513 00:30:27.513 job0: (groupid=0, jobs=1): err= 0: pid=1406707: Tue Oct 8 18:37:45 2024 00:30:27.513 read: IOPS=2015, BW=8063KiB/s (8256kB/s)(8200KiB/1017msec) 00:30:27.513 slat (nsec): min=4807, max=47562, avg=9709.23, stdev=5714.34 00:30:27.513 clat (usec): min=198, max=40932, avg=261.39, stdev=1270.58 00:30:27.513 lat (usec): min=204, max=40947, avg=271.10, stdev=1270.69 00:30:27.513 clat percentiles (usec): 00:30:27.513 | 1.00th=[ 202], 5.00th=[ 204], 10.00th=[ 206], 20.00th=[ 208], 00:30:27.513 | 30.00th=[ 210], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 219], 00:30:27.513 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 245], 95.00th=[ 265], 00:30:27.513 | 99.00th=[ 326], 99.50th=[ 343], 99.90th=[ 396], 99.95th=[40633], 00:30:27.513 | 99.99th=[41157] 00:30:27.513 write: IOPS=2517, BW=9.83MiB/s (10.3MB/s)(10.0MiB/1017msec); 0 zone resets 00:30:27.513 slat (nsec): min=6423, max=47151, avg=12275.46, stdev=5309.79 00:30:27.513 clat (usec): min=137, max=1322, avg=162.19, stdev=45.41 00:30:27.513 lat (usec): min=145, max=1335, avg=174.47, stdev=46.57 00:30:27.513 clat percentiles (usec): 00:30:27.513 | 1.00th=[ 143], 5.00th=[ 145], 10.00th=[ 145], 20.00th=[ 147], 00:30:27.513 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 159], 00:30:27.513 | 70.00th=[ 161], 80.00th=[ 172], 90.00th=[ 184], 95.00th=[ 194], 00:30:27.513 | 99.00th=[ 227], 
99.50th=[ 243], 99.90th=[ 1205], 99.95th=[ 1270], 00:30:27.513 | 99.99th=[ 1319] 00:30:27.513 bw ( KiB/s): min=10200, max=10280, per=64.50%, avg=10240.00, stdev=56.57, samples=2 00:30:27.513 iops : min= 2550, max= 2570, avg=2560.00, stdev=14.14, samples=2 00:30:27.513 lat (usec) : 250=96.42%, 500=3.38%, 750=0.09% 00:30:27.513 lat (msec) : 2=0.07%, 50=0.04% 00:30:27.513 cpu : usr=3.05%, sys=4.72%, ctx=4611, majf=0, minf=1 00:30:27.513 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:27.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.513 issued rwts: total=2050,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.513 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:27.514 job1: (groupid=0, jobs=1): err= 0: pid=1406717: Tue Oct 8 18:37:45 2024 00:30:27.514 read: IOPS=20, BW=83.1KiB/s (85.1kB/s)(84.0KiB/1011msec) 00:30:27.514 slat (nsec): min=15406, max=34247, avg=22024.67, stdev=8703.71 00:30:27.514 clat (usec): min=40900, max=41330, avg=40989.53, stdev=83.76 00:30:27.514 lat (usec): min=40934, max=41347, avg=41011.56, stdev=81.39 00:30:27.514 clat percentiles (usec): 00:30:27.514 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:27.514 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:27.514 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:27.514 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:27.514 | 99.99th=[41157] 00:30:27.514 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:30:27.514 slat (usec): min=6, max=18075, avg=50.57, stdev=798.20 00:30:27.514 clat (usec): min=154, max=559, avg=237.13, stdev=24.58 00:30:27.514 lat (usec): min=187, max=18278, avg=287.70, stdev=796.95 00:30:27.514 clat percentiles (usec): 00:30:27.514 | 1.00th=[ 188], 5.00th=[ 208], 10.00th=[ 217], 20.00th=[ 225], 00:30:27.514 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 235], 60.00th=[ 239], 00:30:27.514 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 258], 95.00th=[ 265], 00:30:27.514 | 99.00th=[ 310], 99.50th=[ 355], 99.90th=[ 562], 99.95th=[ 562], 00:30:27.514 | 99.99th=[ 562] 00:30:27.514 bw ( KiB/s): min= 4096, max= 4096, per=25.80%, avg=4096.00, stdev= 0.00, samples=1 00:30:27.514 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:27.514 lat (usec) : 250=76.36%, 500=19.51%, 750=0.19% 00:30:27.514 lat (msec) : 50=3.94% 00:30:27.514 cpu : usr=0.79%, sys=0.30%, ctx=535, majf=0, minf=1 00:30:27.514 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:27.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.514 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.514 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:27.514 job2: (groupid=0, jobs=1): err= 0: pid=1406718: Tue Oct 8 18:37:45 2024 00:30:27.514 read: IOPS=21, BW=85.3KiB/s (87.3kB/s)(88.0KiB/1032msec) 00:30:27.514 slat (nsec): min=15224, max=34123, avg=20235.91, stdev=7639.74 00:30:27.514 clat (usec): min=40908, max=41255, avg=40984.79, stdev=71.52 00:30:27.514 lat (usec): min=40942, max=41270, avg=41005.03, stdev=69.17 00:30:27.514 clat percentiles (usec): 00:30:27.514 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:27.514 | 30.00th=[41157], 
40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:27.514 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:27.514 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:27.514 | 99.99th=[41157] 00:30:27.514 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:30:27.514 slat (nsec): min=7716, max=48278, avg=15732.85, stdev=6662.27 00:30:27.514 clat (usec): min=170, max=440, avg=232.89, stdev=22.72 00:30:27.514 lat (usec): min=186, max=470, avg=248.62, stdev=22.00 00:30:27.514 clat percentiles (usec): 00:30:27.514 | 1.00th=[ 186], 5.00th=[ 198], 10.00th=[ 208], 20.00th=[ 217], 00:30:27.514 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 239], 00:30:27.514 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 255], 95.00th=[ 265], 00:30:27.514 | 99.00th=[ 285], 99.50th=[ 314], 99.90th=[ 441], 99.95th=[ 441], 00:30:27.514 | 99.99th=[ 441] 00:30:27.514 bw ( KiB/s): min= 4096, max= 4096, per=25.80%, avg=4096.00, stdev= 0.00, samples=1 00:30:27.514 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:27.514 lat (usec) : 250=79.40%, 500=16.48% 00:30:27.514 lat (msec) : 50=4.12% 00:30:27.514 cpu : usr=0.39%, sys=0.78%, ctx=535, majf=0, minf=1 00:30:27.514 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:27.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.514 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.514 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:27.514 job3: (groupid=0, jobs=1): err= 0: pid=1406719: Tue Oct 8 18:37:45 2024 00:30:27.514 read: IOPS=21, BW=85.9KiB/s (87.9kB/s)(88.0KiB/1025msec) 00:30:27.514 slat (nsec): min=15374, max=36743, avg=20655.73, stdev=7572.82 00:30:27.514 clat (usec): min=40625, max=41010, avg=40956.84, stdev=78.43 00:30:27.514 lat (usec): min=40649, max=41027, avg=40977.50, stdev=76.16 00:30:27.514 clat percentiles (usec): 00:30:27.514 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:27.514 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:27.514 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:27.514 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:27.514 | 99.99th=[41157] 00:30:27.514 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:30:27.514 slat (usec): min=10, max=370, avg=21.42, stdev=17.39 00:30:27.514 clat (usec): min=174, max=507, avg=213.76, stdev=30.78 00:30:27.514 lat (usec): min=188, max=603, avg=235.17, stdev=36.96 00:30:27.514 clat percentiles (usec): 00:30:27.514 | 1.00th=[ 180], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 198], 00:30:27.514 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 215], 00:30:27.514 | 70.00th=[ 219], 80.00th=[ 223], 90.00th=[ 233], 95.00th=[ 239], 00:30:27.514 | 99.00th=[ 343], 99.50th=[ 502], 99.90th=[ 510], 99.95th=[ 510], 00:30:27.514 | 99.99th=[ 510] 00:30:27.514 bw ( KiB/s): min= 4096, max= 4096, per=25.80%, avg=4096.00, stdev= 0.00, samples=1 00:30:27.514 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:27.514 lat (usec) : 250=93.07%, 500=2.43%, 750=0.37% 00:30:27.514 lat (msec) : 50=4.12% 00:30:27.514 cpu : usr=0.59%, sys=1.37%, ctx=537, majf=0, minf=1 00:30:27.514 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:27.514 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.514 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.514 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:27.514 00:30:27.514 Run status group 0 (all jobs): 00:30:27.514 READ: bw=8198KiB/s (8394kB/s), 83.1KiB/s-8063KiB/s (85.1kB/s-8256kB/s), io=8460KiB (8663kB), run=1011-1032msec 00:30:27.514 WRITE: bw=15.5MiB/s (16.3MB/s), 1984KiB/s-9.83MiB/s (2032kB/s-10.3MB/s), io=16.0MiB (16.8MB), run=1011-1032msec 00:30:27.514 00:30:27.514 Disk stats (read/write): 00:30:27.514 nvme0n1: ios=2083/2048, merge=0/0, ticks=445/330, in_queue=775, util=86.77% 00:30:27.514 nvme0n2: ios=59/512, merge=0/0, ticks=841/119, in_queue=960, util=90.84% 00:30:27.514 nvme0n3: ios=74/512, merge=0/0, ticks=773/104, in_queue=877, util=95.51% 00:30:27.514 nvme0n4: ios=77/512, merge=0/0, ticks=864/104, in_queue=968, util=95.48% 00:30:27.514 18:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:30:27.514 [global] 00:30:27.514 thread=1 00:30:27.514 invalidate=1 00:30:27.514 rw=randwrite 00:30:27.514 time_based=1 00:30:27.514 runtime=1 00:30:27.514 ioengine=libaio 00:30:27.514 direct=1 00:30:27.514 bs=4096 00:30:27.514 iodepth=1 00:30:27.514 norandommap=0 00:30:27.514 numjobs=1 00:30:27.514 00:30:27.514 verify_dump=1 00:30:27.514 verify_backlog=512 00:30:27.514 verify_state_save=0 00:30:27.514 do_verify=1 00:30:27.514 verify=crc32c-intel 00:30:27.514 [job0] 00:30:27.514 filename=/dev/nvme0n1 00:30:27.514 [job1] 00:30:27.514 filename=/dev/nvme0n2 00:30:27.514 [job2] 00:30:27.514 filename=/dev/nvme0n3 00:30:27.514 [job3] 00:30:27.514 filename=/dev/nvme0n4 00:30:27.514 Could not set queue depth (nvme0n1) 00:30:27.514 Could not set queue depth (nvme0n2) 00:30:27.514 Could not set queue depth (nvme0n3) 00:30:27.514 Could not set queue depth (nvme0n4) 00:30:27.772 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:27.772 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:27.772 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:27.772 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:27.772 fio-3.35 00:30:27.772 Starting 4 threads 00:30:29.160 00:30:29.160 job0: (groupid=0, jobs=1): err= 0: pid=1406944: Tue Oct 8 18:37:47 2024 00:30:29.160 read: IOPS=515, BW=2063KiB/s (2112kB/s)(2104KiB/1020msec) 00:30:29.160 slat (nsec): min=5257, max=33806, avg=12613.14, stdev=5984.02 00:30:29.160 clat (usec): min=188, max=41137, avg=1412.81, stdev=6788.33 00:30:29.160 lat (usec): min=200, max=41151, avg=1425.42, stdev=6789.17 00:30:29.160 clat percentiles (usec): 00:30:29.160 | 1.00th=[ 212], 5.00th=[ 221], 10.00th=[ 223], 20.00th=[ 229], 00:30:29.160 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 247], 00:30:29.160 | 70.00th=[ 258], 80.00th=[ 277], 90.00th=[ 310], 95.00th=[ 334], 00:30:29.160 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:29.160 | 99.99th=[41157] 00:30:29.160 write: IOPS=1003, BW=4016KiB/s (4112kB/s)(4096KiB/1020msec); 0 zone resets 00:30:29.160 slat (usec): min=6, max=23871, avg=40.21, stdev=745.49 
00:30:29.160 clat (usec): min=148, max=540, avg=216.72, stdev=42.31 00:30:29.160 lat (usec): min=155, max=24136, avg=256.93, stdev=748.13 00:30:29.160 clat percentiles (usec): 00:30:29.160 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 192], 00:30:29.160 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 219], 00:30:29.160 | 70.00th=[ 227], 80.00th=[ 239], 90.00th=[ 251], 95.00th=[ 277], 00:30:29.160 | 99.00th=[ 400], 99.50th=[ 429], 99.90th=[ 469], 99.95th=[ 537], 00:30:29.160 | 99.99th=[ 537] 00:30:29.160 bw ( KiB/s): min= 8192, max= 8192, per=58.95%, avg=8192.00, stdev= 0.00, samples=1 00:30:29.160 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:30:29.160 lat (usec) : 250=81.68%, 500=17.29%, 750=0.06% 00:30:29.160 lat (msec) : 50=0.97% 00:30:29.160 cpu : usr=1.08%, sys=2.85%, ctx=1552, majf=0, minf=1 00:30:29.160 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:29.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.160 issued rwts: total=526,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.160 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:29.160 job1: (groupid=0, jobs=1): err= 0: pid=1406945: Tue Oct 8 18:37:47 2024 00:30:29.160 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:30:29.160 slat (nsec): min=6790, max=48753, avg=13311.41, stdev=5776.31 00:30:29.160 clat (usec): min=216, max=41084, avg=381.27, stdev=2077.47 00:30:29.160 lat (usec): min=223, max=41091, avg=394.58, stdev=2077.58 00:30:29.160 clat percentiles (usec): 00:30:29.160 | 1.00th=[ 223], 5.00th=[ 231], 10.00th=[ 233], 20.00th=[ 239], 00:30:29.160 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 258], 00:30:29.160 | 70.00th=[ 281], 80.00th=[ 306], 90.00th=[ 347], 95.00th=[ 388], 00:30:29.160 | 99.00th=[ 570], 99.50th=[ 619], 99.90th=[41157], 99.95th=[41157], 00:30:29.160 | 99.99th=[41157] 00:30:29.160 write: IOPS=1539, BW=6158KiB/s (6306kB/s)(6164KiB/1001msec); 0 zone resets 00:30:29.160 slat (usec): min=7, max=18304, avg=31.19, stdev=465.85 00:30:29.160 clat (usec): min=147, max=1726, avg=215.02, stdev=73.58 00:30:29.160 lat (usec): min=157, max=18562, avg=246.21, stdev=472.95 00:30:29.160 clat percentiles (usec): 00:30:29.160 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 174], 20.00th=[ 178], 00:30:29.160 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 200], 60.00th=[ 208], 00:30:29.160 | 70.00th=[ 223], 80.00th=[ 251], 90.00th=[ 273], 95.00th=[ 289], 00:30:29.160 | 99.00th=[ 441], 99.50th=[ 537], 99.90th=[ 1303], 99.95th=[ 1729], 00:30:29.160 | 99.99th=[ 1729] 00:30:29.160 bw ( KiB/s): min= 4480, max= 4480, per=32.24%, avg=4480.00, stdev= 0.00, samples=1 00:30:29.160 iops : min= 1120, max= 1120, avg=1120.00, stdev= 0.00, samples=1 00:30:29.160 lat (usec) : 250=63.73%, 500=35.13%, 750=0.88%, 1000=0.06% 00:30:29.160 lat (msec) : 2=0.06%, 50=0.13% 00:30:29.160 cpu : usr=3.80%, sys=6.60%, ctx=3080, majf=0, minf=1 00:30:29.160 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:29.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.161 issued rwts: total=1536,1541,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.161 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:29.161 job2: (groupid=0, jobs=1): err= 0: pid=1406946: Tue Oct 8 18:37:47 2024 
00:30:29.161 read: IOPS=35, BW=143KiB/s (147kB/s)(148KiB/1033msec) 00:30:29.161 slat (nsec): min=13546, max=35379, avg=22100.73, stdev=8131.07 00:30:29.161 clat (usec): min=256, max=41037, avg=24421.55, stdev=20186.72 00:30:29.161 lat (usec): min=280, max=41054, avg=24443.65, stdev=20188.42 00:30:29.161 clat percentiles (usec): 00:30:29.161 | 1.00th=[ 258], 5.00th=[ 269], 10.00th=[ 273], 20.00th=[ 281], 00:30:29.161 | 30.00th=[ 326], 40.00th=[ 494], 50.00th=[40633], 60.00th=[40633], 00:30:29.161 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:29.161 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:29.161 | 99.99th=[41157] 00:30:29.161 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:30:29.161 slat (nsec): min=6290, max=46594, avg=15574.04, stdev=5795.65 00:30:29.161 clat (usec): min=186, max=470, avg=230.34, stdev=32.91 00:30:29.161 lat (usec): min=202, max=481, avg=245.91, stdev=32.02 00:30:29.161 clat percentiles (usec): 00:30:29.161 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 208], 00:30:29.161 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 231], 00:30:29.161 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 262], 95.00th=[ 277], 00:30:29.161 | 99.00th=[ 375], 99.50th=[ 424], 99.90th=[ 469], 99.95th=[ 469], 00:30:29.161 | 99.99th=[ 469] 00:30:29.161 bw ( KiB/s): min= 4096, max= 4096, per=29.47%, avg=4096.00, stdev= 0.00, samples=1 00:30:29.161 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:29.161 lat (usec) : 250=75.23%, 500=20.77% 00:30:29.161 lat (msec) : 50=4.01% 00:30:29.161 cpu : usr=0.29%, sys=0.97%, ctx=550, majf=0, minf=1 00:30:29.161 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:29.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.161 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.161 issued rwts: total=37,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.161 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:29.161 job3: (groupid=0, jobs=1): err= 0: pid=1406947: Tue Oct 8 18:37:47 2024 00:30:29.161 read: IOPS=21, BW=85.4KiB/s (87.5kB/s)(88.0KiB/1030msec) 00:30:29.161 slat (nsec): min=15237, max=50047, avg=23433.45, stdev=9156.17 00:30:29.161 clat (usec): min=40567, max=41159, avg=40950.73, stdev=117.14 00:30:29.161 lat (usec): min=40583, max=41177, avg=40974.16, stdev=115.62 00:30:29.161 clat percentiles (usec): 00:30:29.161 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:30:29.161 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:29.161 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:29.161 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:29.161 | 99.99th=[41157] 00:30:29.161 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:30:29.161 slat (nsec): min=6093, max=43804, avg=14913.96, stdev=5834.96 00:30:29.161 clat (usec): min=171, max=425, avg=231.41, stdev=26.03 00:30:29.161 lat (usec): min=181, max=432, avg=246.32, stdev=24.05 00:30:29.161 clat percentiles (usec): 00:30:29.161 | 1.00th=[ 182], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 212], 00:30:29.161 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 233], 00:30:29.161 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 262], 95.00th=[ 277], 00:30:29.161 | 99.00th=[ 306], 99.50th=[ 347], 99.90th=[ 424], 99.95th=[ 424], 00:30:29.161 | 99.99th=[ 424] 
00:30:29.161 bw ( KiB/s): min= 4096, max= 4096, per=29.47%, avg=4096.00, stdev= 0.00, samples=1 00:30:29.161 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:30:29.161 lat (usec) : 250=77.53%, 500=18.35% 00:30:29.161 lat (msec) : 50=4.12% 00:30:29.161 cpu : usr=0.10%, sys=0.97%, ctx=534, majf=0, minf=1 00:30:29.161 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:29.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.161 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.161 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.161 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:29.161 00:30:29.161 Run status group 0 (all jobs): 00:30:29.161 READ: bw=8213KiB/s (8410kB/s), 85.4KiB/s-6138KiB/s (87.5kB/s-6285kB/s), io=8484KiB (8688kB), run=1001-1033msec 00:30:29.161 WRITE: bw=13.6MiB/s (14.2MB/s), 1983KiB/s-6158KiB/s (2030kB/s-6306kB/s), io=14.0MiB (14.7MB), run=1001-1033msec 00:30:29.161 00:30:29.161 Disk stats (read/write): 00:30:29.161 nvme0n1: ios=545/1024, merge=0/0, ticks=1522/214, in_queue=1736, util=98.00% 00:30:29.161 nvme0n2: ios=1050/1536, merge=0/0, ticks=1414/299, in_queue=1713, util=97.26% 00:30:29.161 nvme0n3: ios=82/512, merge=0/0, ticks=891/114, in_queue=1005, util=97.92% 00:30:29.161 nvme0n4: ios=17/512, merge=0/0, ticks=697/110, in_queue=807, util=89.74% 00:30:29.161 18:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:30:29.161 [global] 00:30:29.161 thread=1 00:30:29.161 invalidate=1 00:30:29.161 rw=write 00:30:29.161 time_based=1 00:30:29.161 runtime=1 00:30:29.161 ioengine=libaio 00:30:29.161 direct=1 00:30:29.161 bs=4096 00:30:29.161 iodepth=128 00:30:29.161 norandommap=0 00:30:29.161 numjobs=1 00:30:29.161 00:30:29.161 verify_dump=1 00:30:29.161 verify_backlog=512 00:30:29.161 verify_state_save=0 00:30:29.161 do_verify=1 00:30:29.161 verify=crc32c-intel 00:30:29.161 [job0] 00:30:29.161 filename=/dev/nvme0n1 00:30:29.161 [job1] 00:30:29.161 filename=/dev/nvme0n2 00:30:29.161 [job2] 00:30:29.161 filename=/dev/nvme0n3 00:30:29.161 [job3] 00:30:29.161 filename=/dev/nvme0n4 00:30:29.161 Could not set queue depth (nvme0n1) 00:30:29.161 Could not set queue depth (nvme0n2) 00:30:29.161 Could not set queue depth (nvme0n3) 00:30:29.161 Could not set queue depth (nvme0n4) 00:30:29.161 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:29.161 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:29.161 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:29.161 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:29.161 fio-3.35 00:30:29.161 Starting 4 threads 00:30:30.534 00:30:30.534 job0: (groupid=0, jobs=1): err= 0: pid=1407173: Tue Oct 8 18:37:48 2024 00:30:30.534 read: IOPS=5204, BW=20.3MiB/s (21.3MB/s)(20.5MiB/1008msec) 00:30:30.534 slat (usec): min=2, max=14831, avg=93.11, stdev=732.34 00:30:30.534 clat (usec): min=1081, max=28525, avg=11967.73, stdev=3286.68 00:30:30.534 lat (usec): min=3704, max=33382, avg=12060.84, stdev=3342.50 00:30:30.534 clat percentiles (usec): 00:30:30.534 | 1.00th=[ 5407], 5.00th=[ 8225], 10.00th=[ 9241], 20.00th=[ 9765], 
00:30:30.534 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10945], 60.00th=[11994], 00:30:30.534 | 70.00th=[12518], 80.00th=[13829], 90.00th=[17171], 95.00th=[19268], 00:30:30.534 | 99.00th=[21627], 99.50th=[21627], 99.90th=[27395], 99.95th=[27395], 00:30:30.534 | 99.99th=[28443] 00:30:30.534 write: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec); 0 zone resets 00:30:30.534 slat (usec): min=4, max=39549, avg=86.17, stdev=795.82 00:30:30.534 clat (usec): min=1537, max=41059, avg=10664.77, stdev=2479.96 00:30:30.534 lat (usec): min=1933, max=41091, avg=10750.94, stdev=2546.42 00:30:30.534 clat percentiles (usec): 00:30:30.534 | 1.00th=[ 3458], 5.00th=[ 6521], 10.00th=[ 6915], 20.00th=[ 9110], 00:30:30.534 | 30.00th=[10028], 40.00th=[10683], 50.00th=[11338], 60.00th=[11600], 00:30:30.534 | 70.00th=[11731], 80.00th=[11994], 90.00th=[13042], 95.00th=[14091], 00:30:30.534 | 99.00th=[16909], 99.50th=[16909], 99.90th=[21627], 99.95th=[21627], 00:30:30.534 | 99.99th=[41157] 00:30:30.534 bw ( KiB/s): min=20480, max=24560, per=32.06%, avg=22520.00, stdev=2885.00, samples=2 00:30:30.534 iops : min= 5120, max= 6140, avg=5630.00, stdev=721.25, samples=2 00:30:30.534 lat (msec) : 2=0.05%, 4=0.73%, 10=27.73%, 20=69.85%, 50=1.65% 00:30:30.534 cpu : usr=4.27%, sys=6.45%, ctx=420, majf=0, minf=1 00:30:30.534 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:30:30.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:30.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:30.534 issued rwts: total=5246,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:30.534 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:30.534 job1: (groupid=0, jobs=1): err= 0: pid=1407174: Tue Oct 8 18:37:48 2024 00:30:30.534 read: IOPS=5513, BW=21.5MiB/s (22.6MB/s)(21.6MiB/1003msec) 00:30:30.534 slat (nsec): min=1931, max=11426k, avg=88945.92, stdev=595626.03 00:30:30.534 clat (usec): min=806, max=19428, avg=11500.74, stdev=2172.46 00:30:30.534 lat (usec): min=5057, max=19443, avg=11589.68, stdev=2198.23 00:30:30.534 clat percentiles (usec): 00:30:30.534 | 1.00th=[ 6128], 5.00th=[ 8586], 10.00th=[ 9503], 20.00th=[10028], 00:30:30.534 | 30.00th=[10290], 40.00th=[10552], 50.00th=[11076], 60.00th=[11731], 00:30:30.534 | 70.00th=[12256], 80.00th=[13173], 90.00th=[14484], 95.00th=[15139], 00:30:30.534 | 99.00th=[18220], 99.50th=[18744], 99.90th=[18744], 99.95th=[18744], 00:30:30.534 | 99.99th=[19530] 00:30:30.534 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:30:30.534 slat (usec): min=2, max=8404, avg=83.84, stdev=514.51 00:30:30.534 clat (usec): min=1263, max=20880, avg=11306.94, stdev=1560.07 00:30:30.534 lat (usec): min=1290, max=20893, avg=11390.78, stdev=1625.93 00:30:30.534 clat percentiles (usec): 00:30:30.534 | 1.00th=[ 7046], 5.00th=[ 8717], 10.00th=[ 9896], 20.00th=[10421], 00:30:30.534 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11207], 60.00th=[11469], 00:30:30.534 | 70.00th=[11731], 80.00th=[12125], 90.00th=[13304], 95.00th=[13960], 00:30:30.534 | 99.00th=[16581], 99.50th=[17171], 99.90th=[17957], 99.95th=[18482], 00:30:30.534 | 99.99th=[20841] 00:30:30.534 bw ( KiB/s): min=22224, max=22832, per=32.07%, avg=22528.00, stdev=429.92, samples=2 00:30:30.534 iops : min= 5556, max= 5708, avg=5632.00, stdev=107.48, samples=2 00:30:30.534 lat (usec) : 1000=0.01% 00:30:30.534 lat (msec) : 2=0.01%, 10=14.75%, 20=85.23%, 50=0.01% 00:30:30.534 cpu : usr=4.89%, sys=5.29%, ctx=483, majf=0, minf=1 00:30:30.534 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:30:30.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:30.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:30.534 issued rwts: total=5530,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:30.534 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:30.534 job2: (groupid=0, jobs=1): err= 0: pid=1407175: Tue Oct 8 18:37:48 2024 00:30:30.534 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:30:30.534 slat (usec): min=2, max=15538, avg=143.63, stdev=1090.04 00:30:30.534 clat (usec): min=10001, max=57182, avg=17812.09, stdev=6984.08 00:30:30.534 lat (usec): min=10007, max=57190, avg=17955.72, stdev=7094.67 00:30:30.534 clat percentiles (usec): 00:30:30.534 | 1.00th=[10028], 5.00th=[11863], 10.00th=[13042], 20.00th=[14091], 00:30:30.534 | 30.00th=[14353], 40.00th=[15270], 50.00th=[16319], 60.00th=[16909], 00:30:30.534 | 70.00th=[17957], 80.00th=[19530], 90.00th=[23200], 95.00th=[31851], 00:30:30.534 | 99.00th=[50594], 99.50th=[53216], 99.90th=[57410], 99.95th=[57410], 00:30:30.534 | 99.99th=[57410] 00:30:30.534 write: IOPS=3346, BW=13.1MiB/s (13.7MB/s)(13.2MiB/1006msec); 0 zone resets 00:30:30.534 slat (usec): min=3, max=11991, avg=156.57, stdev=900.29 00:30:30.534 clat (usec): min=824, max=57178, avg=21582.67, stdev=12829.33 00:30:30.534 lat (usec): min=831, max=57190, avg=21739.23, stdev=12923.34 00:30:30.534 clat percentiles (usec): 00:30:30.534 | 1.00th=[ 7373], 5.00th=[11076], 10.00th=[11731], 20.00th=[12387], 00:30:30.534 | 30.00th=[13698], 40.00th=[14484], 50.00th=[14877], 60.00th=[15401], 00:30:30.534 | 70.00th=[20055], 80.00th=[42206], 90.00th=[43779], 95.00th=[44827], 00:30:30.534 | 99.00th=[45351], 99.50th=[45351], 99.90th=[54789], 99.95th=[57410], 00:30:30.534 | 99.99th=[57410] 00:30:30.534 bw ( KiB/s): min=10384, max=15528, per=18.44%, avg=12956.00, stdev=3637.36, samples=2 00:30:30.534 iops : min= 2596, max= 3882, avg=3239.00, stdev=909.34, samples=2 00:30:30.534 lat (usec) : 1000=0.03% 00:30:30.534 lat (msec) : 4=0.02%, 10=1.77%, 20=74.06%, 50=23.53%, 100=0.59% 00:30:30.534 cpu : usr=2.09%, sys=4.08%, ctx=265, majf=0, minf=1 00:30:30.534 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:30:30.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:30.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:30.534 issued rwts: total=3072,3367,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:30.534 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:30.534 job3: (groupid=0, jobs=1): err= 0: pid=1407176: Tue Oct 8 18:37:48 2024 00:30:30.534 read: IOPS=3006, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1007msec) 00:30:30.534 slat (usec): min=3, max=27152, avg=178.65, stdev=1262.22 00:30:30.534 clat (usec): min=748, max=81697, avg=23139.79, stdev=15249.21 00:30:30.534 lat (usec): min=10245, max=81704, avg=23318.44, stdev=15356.38 00:30:30.534 clat percentiles (usec): 00:30:30.534 | 1.00th=[11863], 5.00th=[13042], 10.00th=[13435], 20.00th=[13960], 00:30:30.534 | 30.00th=[14615], 40.00th=[16188], 50.00th=[16581], 60.00th=[16909], 00:30:30.534 | 70.00th=[17433], 80.00th=[34341], 90.00th=[47973], 95.00th=[57410], 00:30:30.534 | 99.00th=[80217], 99.50th=[80217], 99.90th=[81265], 99.95th=[81265], 00:30:30.534 | 99.99th=[81265] 00:30:30.534 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:30:30.534 slat (usec): min=4, max=26759, 
avg=144.10, stdev=1049.44 00:30:30.534 clat (usec): min=4808, max=62584, avg=18512.91, stdev=9025.66 00:30:30.534 lat (usec): min=4815, max=62599, avg=18657.01, stdev=9126.31 00:30:30.534 clat percentiles (usec): 00:30:30.534 | 1.00th=[ 5211], 5.00th=[12911], 10.00th=[13435], 20.00th=[13566], 00:30:30.534 | 30.00th=[13829], 40.00th=[15664], 50.00th=[16319], 60.00th=[16450], 00:30:30.534 | 70.00th=[16712], 80.00th=[18220], 90.00th=[32113], 95.00th=[39584], 00:30:30.534 | 99.00th=[54789], 99.50th=[54789], 99.90th=[54789], 99.95th=[55837], 00:30:30.534 | 99.99th=[62653] 00:30:30.534 bw ( KiB/s): min= 8192, max=16384, per=17.49%, avg=12288.00, stdev=5792.62, samples=2 00:30:30.534 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 00:30:30.534 lat (usec) : 750=0.02% 00:30:30.534 lat (msec) : 10=1.61%, 20=76.20%, 50=16.92%, 100=5.26% 00:30:30.534 cpu : usr=1.69%, sys=5.07%, ctx=188, majf=0, minf=1 00:30:30.534 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:30:30.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:30.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:30.534 issued rwts: total=3028,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:30.534 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:30.534 00:30:30.535 Run status group 0 (all jobs): 00:30:30.535 READ: bw=65.4MiB/s (68.6MB/s), 11.7MiB/s-21.5MiB/s (12.3MB/s-22.6MB/s), io=65.9MiB (69.1MB), run=1003-1008msec 00:30:30.535 WRITE: bw=68.6MiB/s (71.9MB/s), 11.9MiB/s-21.9MiB/s (12.5MB/s-23.0MB/s), io=69.2MiB (72.5MB), run=1003-1008msec 00:30:30.535 00:30:30.535 Disk stats (read/write): 00:30:30.535 nvme0n1: ios=4513/4608, merge=0/0, ticks=52696/47801, in_queue=100497, util=94.29% 00:30:30.535 nvme0n2: ios=4658/4841, merge=0/0, ticks=27732/27953, in_queue=55685, util=91.58% 00:30:30.535 nvme0n3: ios=2597/2566, merge=0/0, ticks=41333/54768, in_queue=96101, util=99.58% 00:30:30.535 nvme0n4: ios=2743/3072, merge=0/0, ticks=17463/19238, in_queue=36701, util=99.69% 00:30:30.535 18:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:30:30.535 [global] 00:30:30.535 thread=1 00:30:30.535 invalidate=1 00:30:30.535 rw=randwrite 00:30:30.535 time_based=1 00:30:30.535 runtime=1 00:30:30.535 ioengine=libaio 00:30:30.535 direct=1 00:30:30.535 bs=4096 00:30:30.535 iodepth=128 00:30:30.535 norandommap=0 00:30:30.535 numjobs=1 00:30:30.535 00:30:30.535 verify_dump=1 00:30:30.535 verify_backlog=512 00:30:30.535 verify_state_save=0 00:30:30.535 do_verify=1 00:30:30.535 verify=crc32c-intel 00:30:30.535 [job0] 00:30:30.535 filename=/dev/nvme0n1 00:30:30.535 [job1] 00:30:30.535 filename=/dev/nvme0n2 00:30:30.535 [job2] 00:30:30.535 filename=/dev/nvme0n3 00:30:30.535 [job3] 00:30:30.535 filename=/dev/nvme0n4 00:30:30.535 Could not set queue depth (nvme0n1) 00:30:30.535 Could not set queue depth (nvme0n2) 00:30:30.535 Could not set queue depth (nvme0n3) 00:30:30.535 Could not set queue depth (nvme0n4) 00:30:30.535 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:30.535 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:30.535 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:30.535 job3: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:30:30.535 fio-3.35 00:30:30.535 Starting 4 threads 00:30:31.907 00:30:31.907 job0: (groupid=0, jobs=1): err= 0: pid=1407407: Tue Oct 8 18:37:49 2024 00:30:31.907 read: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec) 00:30:31.907 slat (usec): min=3, max=12553, avg=121.38, stdev=806.43 00:30:31.907 clat (usec): min=4630, max=51311, avg=14381.44, stdev=5896.84 00:30:31.907 lat (usec): min=4650, max=51330, avg=14502.81, stdev=5960.82 00:30:31.907 clat percentiles (usec): 00:30:31.907 | 1.00th=[ 7439], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10159], 00:30:31.907 | 30.00th=[10945], 40.00th=[12256], 50.00th=[13173], 60.00th=[13960], 00:30:31.907 | 70.00th=[14746], 80.00th=[17171], 90.00th=[21365], 95.00th=[25560], 00:30:31.907 | 99.00th=[39584], 99.50th=[45351], 99.90th=[51119], 99.95th=[51119], 00:30:31.907 | 99.99th=[51119] 00:30:31.907 write: IOPS=3602, BW=14.1MiB/s (14.8MB/s)(14.2MiB/1011msec); 0 zone resets 00:30:31.907 slat (usec): min=4, max=12336, avg=145.67, stdev=730.59 00:30:31.907 clat (usec): min=2755, max=52006, avg=20996.15, stdev=12148.66 00:30:31.907 lat (usec): min=3966, max=52037, avg=21141.83, stdev=12232.26 00:30:31.907 clat percentiles (usec): 00:30:31.907 | 1.00th=[ 4883], 5.00th=[ 7898], 10.00th=[ 9503], 20.00th=[11600], 00:30:31.907 | 30.00th=[12649], 40.00th=[13566], 50.00th=[16909], 60.00th=[20055], 00:30:31.907 | 70.00th=[24511], 80.00th=[31327], 90.00th=[41681], 95.00th=[47449], 00:30:31.907 | 99.00th=[51119], 99.50th=[51643], 99.90th=[52167], 99.95th=[52167], 00:30:31.907 | 99.99th=[52167] 00:30:31.907 bw ( KiB/s): min=14032, max=14640, per=26.45%, avg=14336.00, stdev=429.92, samples=2 00:30:31.907 iops : min= 3508, max= 3660, avg=3584.00, stdev=107.48, samples=2 00:30:31.907 lat (msec) : 4=0.10%, 10=13.85%, 20=57.94%, 50=26.64%, 100=1.47% 00:30:31.907 cpu : usr=3.96%, sys=8.42%, ctx=349, majf=0, minf=1 00:30:31.907 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:30:31.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:31.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:31.907 issued rwts: total=3584,3642,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:31.907 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:31.907 job1: (groupid=0, jobs=1): err= 0: pid=1407408: Tue Oct 8 18:37:49 2024 00:30:31.907 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:30:31.907 slat (usec): min=2, max=13074, avg=120.48, stdev=853.09 00:30:31.907 clat (usec): min=4261, max=37259, avg=15705.38, stdev=5347.38 00:30:31.907 lat (usec): min=4272, max=37272, avg=15825.86, stdev=5403.54 00:30:31.907 clat percentiles (usec): 00:30:31.907 | 1.00th=[ 6128], 5.00th=[ 8717], 10.00th=[ 9634], 20.00th=[10552], 00:30:31.907 | 30.00th=[11863], 40.00th=[13566], 50.00th=[14615], 60.00th=[16319], 00:30:31.907 | 70.00th=[18220], 80.00th=[21103], 90.00th=[23200], 95.00th=[25560], 00:30:31.907 | 99.00th=[28181], 99.50th=[29230], 99.90th=[30540], 99.95th=[34341], 00:30:31.907 | 99.99th=[37487] 00:30:31.907 write: IOPS=3864, BW=15.1MiB/s (15.8MB/s)(15.2MiB/1007msec); 0 zone resets 00:30:31.907 slat (usec): min=3, max=13352, avg=125.55, stdev=778.24 00:30:31.907 clat (usec): min=775, max=95036, avg=18313.75, stdev=12119.76 00:30:31.907 lat (usec): min=1010, max=95042, avg=18439.30, stdev=12178.69 00:30:31.907 clat percentiles (usec): 00:30:31.907 | 1.00th=[ 2638], 5.00th=[ 5669], 10.00th=[ 8586], 
20.00th=[10814], 00:30:31.907 | 30.00th=[11994], 40.00th=[13960], 50.00th=[14877], 60.00th=[16319], 00:30:31.907 | 70.00th=[19792], 80.00th=[21627], 90.00th=[34341], 95.00th=[41681], 00:30:31.907 | 99.00th=[72877], 99.50th=[82314], 99.90th=[88605], 99.95th=[88605], 00:30:31.907 | 99.99th=[94897] 00:30:31.907 bw ( KiB/s): min=12800, max=17320, per=27.79%, avg=15060.00, stdev=3196.12, samples=2 00:30:31.907 iops : min= 3200, max= 4330, avg=3765.00, stdev=799.03, samples=2 00:30:31.907 lat (usec) : 1000=0.03% 00:30:31.907 lat (msec) : 2=0.19%, 4=0.66%, 10=14.31%, 20=57.77%, 50=25.90% 00:30:31.907 lat (msec) : 100=1.15% 00:30:31.907 cpu : usr=3.48%, sys=6.16%, ctx=297, majf=0, minf=1 00:30:31.907 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:30:31.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:31.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:31.907 issued rwts: total=3584,3892,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:31.907 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:31.907 job2: (groupid=0, jobs=1): err= 0: pid=1407409: Tue Oct 8 18:37:49 2024 00:30:31.907 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:30:31.907 slat (usec): min=3, max=13704, avg=140.07, stdev=900.34 00:30:31.907 clat (usec): min=9709, max=41096, avg=18324.57, stdev=5561.57 00:30:31.907 lat (usec): min=9722, max=44624, avg=18464.64, stdev=5617.52 00:30:31.907 clat percentiles (usec): 00:30:31.907 | 1.00th=[10421], 5.00th=[11207], 10.00th=[13042], 20.00th=[13960], 00:30:31.907 | 30.00th=[14746], 40.00th=[15401], 50.00th=[16909], 60.00th=[18220], 00:30:31.907 | 70.00th=[20055], 80.00th=[22414], 90.00th=[27132], 95.00th=[29230], 00:30:31.907 | 99.00th=[37487], 99.50th=[37487], 99.90th=[37487], 99.95th=[37487], 00:30:31.907 | 99.99th=[41157] 00:30:31.907 write: IOPS=3070, BW=12.0MiB/s (12.6MB/s)(12.1MiB/1007msec); 0 zone resets 00:30:31.907 slat (usec): min=4, max=26518, avg=174.14, stdev=1195.20 00:30:31.907 clat (usec): min=2085, max=55305, avg=22458.59, stdev=9574.56 00:30:31.907 lat (usec): min=7093, max=55328, avg=22632.73, stdev=9657.38 00:30:31.907 clat percentiles (usec): 00:30:31.907 | 1.00th=[10290], 5.00th=[12256], 10.00th=[13042], 20.00th=[14353], 00:30:31.907 | 30.00th=[17171], 40.00th=[18744], 50.00th=[20317], 60.00th=[21365], 00:30:31.907 | 70.00th=[23725], 80.00th=[28705], 90.00th=[33817], 95.00th=[49021], 00:30:31.907 | 99.00th=[53740], 99.50th=[53740], 99.90th=[53740], 99.95th=[55313], 00:30:31.907 | 99.99th=[55313] 00:30:31.907 bw ( KiB/s): min=12288, max=12288, per=22.67%, avg=12288.00, stdev= 0.00, samples=2 00:30:31.907 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:30:31.907 lat (msec) : 4=0.02%, 10=0.60%, 20=57.75%, 50=40.22%, 100=1.41% 00:30:31.907 cpu : usr=3.08%, sys=8.35%, ctx=255, majf=0, minf=1 00:30:31.907 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:30:31.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:31.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:31.907 issued rwts: total=3072,3092,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:31.907 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:31.907 job3: (groupid=0, jobs=1): err= 0: pid=1407410: Tue Oct 8 18:37:49 2024 00:30:31.907 read: IOPS=3008, BW=11.8MiB/s (12.3MB/s)(11.8MiB/1008msec) 00:30:31.907 slat (usec): min=3, max=19157, avg=137.69, stdev=861.57 00:30:31.907 clat (usec): 
min=3180, max=56624, avg=17382.10, stdev=8293.43 00:30:31.907 lat (usec): min=3198, max=56628, avg=17519.79, stdev=8358.86 00:30:31.907 clat percentiles (usec): 00:30:31.907 | 1.00th=[ 3326], 5.00th=[10290], 10.00th=[11600], 20.00th=[12387], 00:30:31.907 | 30.00th=[12649], 40.00th=[12780], 50.00th=[13698], 60.00th=[14615], 00:30:31.907 | 70.00th=[18482], 80.00th=[23462], 90.00th=[29230], 95.00th=[36439], 00:30:31.907 | 99.00th=[42730], 99.50th=[51643], 99.90th=[56361], 99.95th=[56361], 00:30:31.907 | 99.99th=[56886] 00:30:31.907 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone resets 00:30:31.907 slat (usec): min=3, max=25818, avg=165.99, stdev=978.34 00:30:31.907 clat (usec): min=212, max=120705, avg=24489.02, stdev=20444.36 00:30:31.907 lat (usec): min=266, max=120713, avg=24655.01, stdev=20502.09 00:30:31.907 clat percentiles (usec): 00:30:31.907 | 1.00th=[ 889], 5.00th=[ 1467], 10.00th=[ 2638], 20.00th=[ 12125], 00:30:31.907 | 30.00th=[ 13960], 40.00th=[ 19530], 50.00th=[ 20317], 60.00th=[ 23462], 00:30:31.907 | 70.00th=[ 25297], 80.00th=[ 34866], 90.00th=[ 49021], 95.00th=[ 55837], 00:30:31.907 | 99.00th=[114820], 99.50th=[120062], 99.90th=[121111], 99.95th=[121111], 00:30:31.907 | 99.99th=[121111] 00:30:31.907 bw ( KiB/s): min=12288, max=12288, per=22.67%, avg=12288.00, stdev= 0.00, samples=2 00:30:31.907 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:30:31.907 lat (usec) : 250=0.02%, 500=0.08%, 750=0.16%, 1000=0.52% 00:30:31.907 lat (msec) : 2=3.08%, 4=3.75%, 10=1.08%, 20=50.65%, 50=35.61% 00:30:31.907 lat (msec) : 100=3.64%, 250=1.41% 00:30:31.907 cpu : usr=4.27%, sys=6.95%, ctx=358, majf=0, minf=2 00:30:31.907 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:30:31.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:31.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:31.907 issued rwts: total=3033,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:31.907 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:31.907 00:30:31.907 Run status group 0 (all jobs): 00:30:31.907 READ: bw=51.3MiB/s (53.8MB/s), 11.8MiB/s-13.9MiB/s (12.3MB/s-14.6MB/s), io=51.8MiB (54.4MB), run=1007-1011msec 00:30:31.907 WRITE: bw=52.9MiB/s (55.5MB/s), 11.9MiB/s-15.1MiB/s (12.5MB/s-15.8MB/s), io=53.5MiB (56.1MB), run=1007-1011msec 00:30:31.907 00:30:31.907 Disk stats (read/write): 00:30:31.907 nvme0n1: ios=2747/3072, merge=0/0, ticks=35932/68300, in_queue=104232, util=86.07% 00:30:31.907 nvme0n2: ios=3123/3442, merge=0/0, ticks=33942/51505, in_queue=85447, util=90.06% 00:30:31.907 nvme0n3: ios=2616/2661, merge=0/0, ticks=22221/29718, in_queue=51939, util=93.46% 00:30:31.907 nvme0n4: ios=2617/2752, merge=0/0, ticks=19678/35363, in_queue=55041, util=95.50% 00:30:31.907 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:30:31.907 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1407549 00:30:31.907 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:30:31.907 18:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:30:31.907 [global] 00:30:31.907 thread=1 00:30:31.907 invalidate=1 00:30:31.907 rw=read 00:30:31.907 time_based=1 00:30:31.907 runtime=10 00:30:31.907 ioengine=libaio 00:30:31.907 
direct=1 00:30:31.907 bs=4096 00:30:31.907 iodepth=1 00:30:31.907 norandommap=1 00:30:31.907 numjobs=1 00:30:31.907 00:30:31.907 [job0] 00:30:31.907 filename=/dev/nvme0n1 00:30:31.907 [job1] 00:30:31.907 filename=/dev/nvme0n2 00:30:31.907 [job2] 00:30:31.907 filename=/dev/nvme0n3 00:30:31.907 [job3] 00:30:31.907 filename=/dev/nvme0n4 00:30:31.907 Could not set queue depth (nvme0n1) 00:30:31.907 Could not set queue depth (nvme0n2) 00:30:31.907 Could not set queue depth (nvme0n3) 00:30:31.907 Could not set queue depth (nvme0n4) 00:30:32.164 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:32.164 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:32.164 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:32.164 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:32.164 fio-3.35 00:30:32.164 Starting 4 threads 00:30:35.441 18:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:30:35.441 18:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:30:35.441 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=299008, buflen=4096 00:30:35.441 fio: pid=1407760, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:35.441 18:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:35.442 18:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:30:35.442 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=335872, buflen=4096 00:30:35.442 fio: pid=1407759, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:35.699 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=19443712, buflen=4096 00:30:35.699 fio: pid=1407757, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:35.699 18:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:35.699 18:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:30:35.959 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=10817536, buflen=4096 00:30:35.959 fio: pid=1407758, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:30:35.959 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:35.959 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:30:35.959 00:30:35.959 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1407757: Tue Oct 8 
18:37:54 2024 00:30:35.959 read: IOPS=1348, BW=5393KiB/s (5522kB/s)(18.5MiB/3521msec) 00:30:35.959 slat (usec): min=4, max=12865, avg= 8.50, stdev=186.65 00:30:35.959 clat (usec): min=183, max=41088, avg=727.05, stdev=4544.16 00:30:35.959 lat (usec): min=188, max=53951, avg=735.54, stdev=4573.83 00:30:35.959 clat percentiles (usec): 00:30:35.959 | 1.00th=[ 190], 5.00th=[ 192], 10.00th=[ 192], 20.00th=[ 194], 00:30:35.959 | 30.00th=[ 196], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 206], 00:30:35.959 | 70.00th=[ 210], 80.00th=[ 219], 90.00th=[ 265], 95.00th=[ 289], 00:30:35.959 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:35.959 | 99.99th=[41157] 00:30:35.959 bw ( KiB/s): min= 96, max=18136, per=65.33%, avg=5182.67, stdev=8071.52, samples=6 00:30:35.959 iops : min= 24, max= 4534, avg=1295.67, stdev=2017.88, samples=6 00:30:35.959 lat (usec) : 250=87.99%, 500=10.51%, 750=0.21% 00:30:35.959 lat (msec) : 50=1.26% 00:30:35.959 cpu : usr=0.31%, sys=0.85%, ctx=4749, majf=0, minf=1 00:30:35.959 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:35.959 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.959 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.959 issued rwts: total=4748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:35.959 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:35.959 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1407758: Tue Oct 8 18:37:54 2024 00:30:35.959 read: IOPS=694, BW=2777KiB/s (2844kB/s)(10.3MiB/3804msec) 00:30:35.959 slat (usec): min=4, max=30734, avg=27.38, stdev=649.21 00:30:35.959 clat (usec): min=187, max=42308, avg=1407.21, stdev=6834.01 00:30:35.959 lat (usec): min=191, max=42315, avg=1434.60, stdev=6863.01 00:30:35.959 clat percentiles (usec): 00:30:35.959 | 1.00th=[ 194], 5.00th=[ 196], 10.00th=[ 198], 20.00th=[ 202], 00:30:35.959 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 221], 00:30:35.959 | 70.00th=[ 233], 80.00th=[ 255], 90.00th=[ 297], 95.00th=[ 449], 00:30:35.959 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:30:35.959 | 99.99th=[42206] 00:30:35.959 bw ( KiB/s): min= 160, max= 9198, per=23.12%, avg=1834.00, stdev=3279.56, samples=7 00:30:35.959 iops : min= 40, max= 2299, avg=458.43, stdev=819.70, samples=7 00:30:35.959 lat (usec) : 250=77.25%, 500=19.00%, 750=0.79% 00:30:35.959 lat (msec) : 10=0.04%, 20=0.04%, 50=2.84% 00:30:35.960 cpu : usr=0.24%, sys=0.71%, ctx=2646, majf=0, minf=1 00:30:35.960 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:35.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.960 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.960 issued rwts: total=2642,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:35.960 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:35.960 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1407759: Tue Oct 8 18:37:54 2024 00:30:35.960 read: IOPS=25, BW=101KiB/s (103kB/s)(328KiB/3262msec) 00:30:35.960 slat (nsec): min=12337, max=40579, avg=22491.18, stdev=9952.23 00:30:35.960 clat (usec): min=248, max=41106, avg=39465.61, stdev=7674.69 00:30:35.960 lat (usec): min=270, max=41143, avg=39488.17, stdev=7675.32 00:30:35.960 clat percentiles (usec): 00:30:35.960 | 1.00th=[ 249], 5.00th=[40633], 10.00th=[40633], 
20.00th=[41157], 00:30:35.960 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:35.960 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:35.960 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:35.960 | 99.99th=[41157] 00:30:35.960 bw ( KiB/s): min= 96, max= 104, per=1.27%, avg=101.33, stdev= 4.13, samples=6 00:30:35.960 iops : min= 24, max= 26, avg=25.33, stdev= 1.03, samples=6 00:30:35.960 lat (usec) : 250=1.20%, 500=2.41% 00:30:35.960 lat (msec) : 50=95.18% 00:30:35.960 cpu : usr=0.09%, sys=0.00%, ctx=86, majf=0, minf=2 00:30:35.960 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:35.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.960 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.960 issued rwts: total=83,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:35.960 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:35.960 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1407760: Tue Oct 8 18:37:54 2024 00:30:35.960 read: IOPS=24, BW=98.5KiB/s (101kB/s)(292KiB/2963msec) 00:30:35.960 slat (nsec): min=9166, max=37167, avg=21995.88, stdev=9060.57 00:30:35.960 clat (usec): min=447, max=41148, avg=40419.20, stdev=4743.55 00:30:35.960 lat (usec): min=467, max=41158, avg=40441.26, stdev=4743.79 00:30:35.960 clat percentiles (usec): 00:30:35.960 | 1.00th=[ 449], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:30:35.960 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:35.960 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:35.960 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:35.960 | 99.99th=[41157] 00:30:35.960 bw ( KiB/s): min= 96, max= 104, per=1.25%, avg=99.20, stdev= 4.38, samples=5 00:30:35.960 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:30:35.960 lat (usec) : 500=1.35% 00:30:35.960 lat (msec) : 50=97.30% 00:30:35.960 cpu : usr=0.10%, sys=0.00%, ctx=74, majf=0, minf=2 00:30:35.960 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:35.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.960 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.960 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:35.960 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:35.960 00:30:35.960 Run status group 0 (all jobs): 00:30:35.960 READ: bw=7932KiB/s (8122kB/s), 98.5KiB/s-5393KiB/s (101kB/s-5522kB/s), io=29.5MiB (30.9MB), run=2963-3804msec 00:30:35.960 00:30:35.960 Disk stats (read/write): 00:30:35.960 nvme0n1: ios=4146/0, merge=0/0, ticks=3340/0, in_queue=3340, util=96.45% 00:30:35.960 nvme0n2: ios=1837/0, merge=0/0, ticks=3530/0, in_queue=3530, util=95.71% 00:30:35.960 nvme0n3: ios=120/0, merge=0/0, ticks=3546/0, in_queue=3546, util=98.88% 00:30:35.960 nvme0n4: ios=70/0, merge=0/0, ticks=2830/0, in_queue=2830, util=96.75% 00:30:36.253 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:36.253 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:30:36.533 18:37:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:36.533 18:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:30:36.790 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:36.790 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:30:37.048 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:30:37.048 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:30:37.613 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:30:37.613 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1407549 00:30:37.613 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:30:37.613 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:37.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:37.613 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:37.613 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:30:37.613 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:30:37.613 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:37.613 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:30:37.613 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:37.613 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:30:37.613 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:30:37.613 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:30:37.613 nvmf hotplug test: fio failed as expected 00:30:37.613 18:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:37.870 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:30:37.870 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:30:37.870 18:37:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:30:37.870 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:30:37.870 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:30:37.870 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:37.870 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:30:37.870 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:37.870 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:30:37.870 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:37.870 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:37.870 rmmod nvme_tcp 00:30:37.870 rmmod nvme_fabrics 00:30:37.870 rmmod nvme_keyring 00:30:37.870 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:37.870 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:30:37.870 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:30:37.870 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 1405647 ']' 00:30:37.870 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 1405647 00:30:37.870 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1405647 ']' 00:30:37.870 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1405647 00:30:37.870 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:30:37.870 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:37.870 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1405647 00:30:37.870 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:37.870 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:37.870 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1405647' 00:30:37.870 killing process with pid 1405647 00:30:37.870 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1405647 00:30:37.871 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1405647 00:30:38.128 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:38.128 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:38.129 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:38.129 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:30:38.129 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:30:38.129 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:38.129 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:30:38.129 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:38.129 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:38.129 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.129 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:38.129 18:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.656 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:40.656 00:30:40.656 real 0m24.152s 00:30:40.656 user 1m8.480s 00:30:40.656 sys 0m9.528s 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:40.657 ************************************ 00:30:40.657 END TEST nvmf_fio_target 00:30:40.657 ************************************ 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:40.657 ************************************ 00:30:40.657 START TEST nvmf_bdevio 00:30:40.657 ************************************ 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:30:40.657 * Looking for test storage... 
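For readers tracing the hotplug test that finishes above, the logged commands condense to roughly the sequence below. This is a sketch reconstructed only from the trace lines in this log, not the literal target/fio.sh source: the workspace prefix is shortened to $SPDK, the background/wait handling is simplified, and the loop over malloc bdevs stands in for the script's per-bdev iteration.

    # start a 10-second read workload against the exported namespaces (runs in the background)
    $SPDK/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    # delete the RAID and malloc bdevs underneath the running job; fio is expected
    # to report "Operation not supported" on the affected namespaces
    $SPDK/scripts/rpc.py bdev_raid_delete concat0
    $SPDK/scripts/rpc.py bdev_raid_delete raid0
    for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        $SPDK/scripts/rpc.py bdev_malloc_delete "$m"
    done
    # fio exiting non-zero here is the pass condition for the hotplug test
    wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'
    # tear down the initiator connection and the target subsystem
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    $SPDK/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1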
00:30:40.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:40.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.657 --rc genhtml_branch_coverage=1 00:30:40.657 --rc genhtml_function_coverage=1 00:30:40.657 --rc genhtml_legend=1 00:30:40.657 --rc geninfo_all_blocks=1 00:30:40.657 --rc geninfo_unexecuted_blocks=1 00:30:40.657 00:30:40.657 ' 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:40.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.657 --rc genhtml_branch_coverage=1 00:30:40.657 --rc genhtml_function_coverage=1 00:30:40.657 --rc genhtml_legend=1 00:30:40.657 --rc geninfo_all_blocks=1 00:30:40.657 --rc geninfo_unexecuted_blocks=1 00:30:40.657 00:30:40.657 ' 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:40.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.657 --rc genhtml_branch_coverage=1 00:30:40.657 --rc genhtml_function_coverage=1 00:30:40.657 --rc genhtml_legend=1 00:30:40.657 --rc geninfo_all_blocks=1 00:30:40.657 --rc geninfo_unexecuted_blocks=1 00:30:40.657 00:30:40.657 ' 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:40.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:40.657 --rc genhtml_branch_coverage=1 00:30:40.657 --rc genhtml_function_coverage=1 00:30:40.657 --rc genhtml_legend=1 00:30:40.657 --rc geninfo_all_blocks=1 00:30:40.657 --rc geninfo_unexecuted_blocks=1 00:30:40.657 00:30:40.657 ' 00:30:40.657 18:37:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.657 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:30:40.658 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.658 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:30:40.658 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:40.658 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:40.658 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:40.658 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:40.658 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:40.658 18:37:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:40.658 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:40.658 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:40.658 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:40.658 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:40.658 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:40.658 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:40.658 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:30:40.658 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:40.658 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:40.658 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:40.658 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:40.658 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:40.658 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:40.658 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:40.658 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.658 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:40.658 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:40.658 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:30:40.658 18:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:42.559 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:42.559 18:38:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:42.559 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:42.559 Found net devices under 0000:09:00.0: cvl_0_0 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 
-- # [[ tcp == tcp ]] 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:42.559 Found net devices under 0000:09:00.1: cvl_0_1 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:42.559 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:42.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:42.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.374 ms 00:30:42.560 00:30:42.560 --- 10.0.0.2 ping statistics --- 00:30:42.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.560 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:30:42.560 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:42.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:42.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:30:42.560 00:30:42.560 --- 10.0.0.1 ping statistics --- 00:30:42.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.560 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:30:42.560 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:42.560 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:30:42.560 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:42.560 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:42.560 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:42.560 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:42.560 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:42.560 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:42.560 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:42.818 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:30:42.818 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:42.818 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:42.818 18:38:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:42.818 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=1410382 00:30:42.818 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:30:42.818 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 1410382 00:30:42.818 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1410382 ']' 00:30:42.818 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:42.818 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:42.818 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:42.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:42.818 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:42.818 18:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:42.818 [2024-10-08 18:38:00.903351] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:42.818 [2024-10-08 18:38:00.904498] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:30:42.818 [2024-10-08 18:38:00.904571] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:42.818 [2024-10-08 18:38:00.970653] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:42.818 [2024-10-08 18:38:01.080033] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:42.818 [2024-10-08 18:38:01.080113] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:42.818 [2024-10-08 18:38:01.080143] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:42.818 [2024-10-08 18:38:01.080155] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:42.818 [2024-10-08 18:38:01.080165] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:42.818 [2024-10-08 18:38:01.082031] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:30:42.818 [2024-10-08 18:38:01.082093] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:30:42.818 [2024-10-08 18:38:01.082161] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:30:42.818 [2024-10-08 18:38:01.082164] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:30:43.076 [2024-10-08 18:38:01.196024] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:43.076 [2024-10-08 18:38:01.196190] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:43.076 [2024-10-08 18:38:01.196471] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:43.076 [2024-10-08 18:38:01.197105] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:43.076 [2024-10-08 18:38:01.197387] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:43.076 [2024-10-08 18:38:01.242918] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:43.076 Malloc0 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.076 18:38:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:43.076 [2024-10-08 18:38:01.299085] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:43.076 { 00:30:43.076 "params": { 00:30:43.076 "name": "Nvme$subsystem", 00:30:43.076 "trtype": "$TEST_TRANSPORT", 00:30:43.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:43.076 "adrfam": "ipv4", 00:30:43.076 "trsvcid": "$NVMF_PORT", 00:30:43.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:43.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:43.076 "hdgst": ${hdgst:-false}, 00:30:43.076 "ddgst": ${ddgst:-false} 00:30:43.076 }, 00:30:43.076 "method": "bdev_nvme_attach_controller" 00:30:43.076 } 00:30:43.076 EOF 00:30:43.076 )") 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:30:43.076 18:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:43.076 "params": { 00:30:43.076 "name": "Nvme1", 00:30:43.076 "trtype": "tcp", 00:30:43.076 "traddr": "10.0.0.2", 00:30:43.076 "adrfam": "ipv4", 00:30:43.076 "trsvcid": "4420", 00:30:43.076 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:43.076 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:43.076 "hdgst": false, 00:30:43.076 "ddgst": false 00:30:43.076 }, 00:30:43.076 "method": "bdev_nvme_attach_controller" 00:30:43.076 }' 00:30:43.076 [2024-10-08 18:38:01.348641] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:30:43.076 [2024-10-08 18:38:01.348721] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1410414 ] 00:30:43.333 [2024-10-08 18:38:01.409045] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:43.333 [2024-10-08 18:38:01.526001] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:43.333 [2024-10-08 18:38:01.526049] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:30:43.333 [2024-10-08 18:38:01.526053] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:43.591 I/O targets: 00:30:43.591 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:30:43.591 00:30:43.591 00:30:43.591 CUnit - A unit testing framework for C - Version 2.1-3 00:30:43.591 http://cunit.sourceforge.net/ 00:30:43.591 00:30:43.591 00:30:43.591 Suite: bdevio tests on: Nvme1n1 00:30:43.591 Test: blockdev write read block ...passed 00:30:43.591 Test: blockdev write zeroes read block ...passed 00:30:43.591 Test: blockdev write zeroes read no split ...passed 00:30:43.591 Test: blockdev write zeroes read split ...passed 00:30:43.591 Test: blockdev write zeroes read split partial ...passed 00:30:43.591 Test: blockdev reset ...[2024-10-08 18:38:01.851149] nvme_ctrlr.c:1744:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:43.591 [2024-10-08 18:38:01.851248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x973700 (9): Bad file descriptor 00:30:43.848 [2024-10-08 18:38:01.944452] bdev_nvme.c:2221:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:43.848 passed 00:30:43.848 Test: blockdev write read 8 blocks ...passed 00:30:43.848 Test: blockdev write read size > 128k ...passed 00:30:43.848 Test: blockdev write read invalid size ...passed 00:30:43.848 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:43.848 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:43.848 Test: blockdev write read max offset ...passed 00:30:43.848 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:43.848 Test: blockdev writev readv 8 blocks ...passed 00:30:43.848 Test: blockdev writev readv 30 x 1block ...passed 00:30:44.106 Test: blockdev writev readv block ...passed 00:30:44.106 Test: blockdev writev readv size > 128k ...passed 00:30:44.106 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:44.106 Test: blockdev comparev and writev ...[2024-10-08 18:38:02.154675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:44.106 [2024-10-08 18:38:02.154712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:44.106 [2024-10-08 18:38:02.154737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:44.106 [2024-10-08 18:38:02.154754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:44.106 [2024-10-08 18:38:02.155138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:44.106 [2024-10-08 18:38:02.155163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:44.106 [2024-10-08 18:38:02.155185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:44.106 [2024-10-08 18:38:02.155202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:44.106 [2024-10-08 18:38:02.155587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:44.106 [2024-10-08 18:38:02.155612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:44.106 [2024-10-08 18:38:02.155635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:44.106 [2024-10-08 18:38:02.155651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:44.106 [2024-10-08 18:38:02.156052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:44.106 [2024-10-08 18:38:02.156076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:44.106 [2024-10-08 18:38:02.156097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:30:44.106 [2024-10-08 18:38:02.156114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:44.106 passed 00:30:44.106 Test: blockdev nvme passthru rw ...passed 00:30:44.106 Test: blockdev nvme passthru vendor specific ...[2024-10-08 18:38:02.238597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:44.106 [2024-10-08 18:38:02.238626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:44.106 [2024-10-08 18:38:02.238784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:44.106 [2024-10-08 18:38:02.238808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:44.106 [2024-10-08 18:38:02.238958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:44.106 [2024-10-08 18:38:02.238981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:44.106 [2024-10-08 18:38:02.239135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:44.106 [2024-10-08 18:38:02.239164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:44.106 passed 00:30:44.106 Test: blockdev nvme admin passthru ...passed 00:30:44.106 Test: blockdev copy ...passed 00:30:44.106 00:30:44.106 Run Summary: Type Total Ran Passed Failed Inactive 00:30:44.106 suites 1 1 n/a 0 0 00:30:44.106 tests 23 23 23 0 0 00:30:44.106 asserts 152 152 152 0 n/a 00:30:44.106 00:30:44.106 Elapsed time = 1.176 seconds 00:30:44.364 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:44.364 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.364 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:44.364 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.364 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:30:44.364 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:30:44.364 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:44.364 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:30:44.364 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:44.364 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:30:44.364 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:44.364 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:44.364 rmmod nvme_tcp 00:30:44.364 rmmod nvme_fabrics 00:30:44.364 rmmod nvme_keyring 00:30:44.364 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:30:44.364 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:30:44.364 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:30:44.364 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 1410382 ']' 00:30:44.364 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 1410382 00:30:44.364 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1410382 ']' 00:30:44.364 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1410382 00:30:44.364 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:30:44.364 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:44.364 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1410382 00:30:44.364 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:30:44.364 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:30:44.364 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1410382' 00:30:44.364 killing process with pid 1410382 00:30:44.364 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1410382 00:30:44.364 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1410382 00:30:44.622 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:44.622 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:44.622 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:44.622 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:30:44.622 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:30:44.622 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:44.622 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:30:44.622 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:44.622 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:44.622 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.622 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:44.622 18:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.153 18:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:47.153 00:30:47.153 real 0m6.449s 00:30:47.153 user 
0m8.558s 00:30:47.153 sys 0m2.496s 00:30:47.153 18:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:47.154 18:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:30:47.154 ************************************ 00:30:47.154 END TEST nvmf_bdevio 00:30:47.154 ************************************ 00:30:47.154 18:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:30:47.154 00:30:47.154 real 3m58.420s 00:30:47.154 user 9m4.327s 00:30:47.154 sys 1m24.247s 00:30:47.154 18:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:47.154 18:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:47.154 ************************************ 00:30:47.154 END TEST nvmf_target_core_interrupt_mode 00:30:47.154 ************************************ 00:30:47.154 18:38:04 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:30:47.154 18:38:04 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:47.154 18:38:04 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:47.154 18:38:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:47.154 ************************************ 00:30:47.154 START TEST nvmf_interrupt 00:30:47.154 ************************************ 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:30:47.154 * Looking for test storage... 
00:30:47.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lcov --version 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:30:47.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:47.154 --rc genhtml_branch_coverage=1 00:30:47.154 --rc genhtml_function_coverage=1 00:30:47.154 --rc genhtml_legend=1 00:30:47.154 --rc geninfo_all_blocks=1 00:30:47.154 --rc geninfo_unexecuted_blocks=1 00:30:47.154 00:30:47.154 ' 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:30:47.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:47.154 --rc genhtml_branch_coverage=1 00:30:47.154 --rc genhtml_function_coverage=1 00:30:47.154 --rc genhtml_legend=1 00:30:47.154 --rc geninfo_all_blocks=1 00:30:47.154 --rc geninfo_unexecuted_blocks=1 00:30:47.154 00:30:47.154 ' 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:30:47.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:47.154 --rc genhtml_branch_coverage=1 00:30:47.154 --rc genhtml_function_coverage=1 00:30:47.154 --rc genhtml_legend=1 00:30:47.154 --rc geninfo_all_blocks=1 00:30:47.154 --rc geninfo_unexecuted_blocks=1 00:30:47.154 00:30:47.154 ' 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:30:47.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:47.154 --rc genhtml_branch_coverage=1 00:30:47.154 --rc genhtml_function_coverage=1 00:30:47.154 --rc genhtml_legend=1 00:30:47.154 --rc geninfo_all_blocks=1 00:30:47.154 --rc geninfo_unexecuted_blocks=1 00:30:47.154 00:30:47.154 ' 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:47.154 18:38:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:30:47.155 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:47.155 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:47.155 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:47.155 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:47.155 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:47.155 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:47.155 18:38:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:47.155 18:38:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.155 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:47.155 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:47.155 18:38:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:30:47.155 18:38:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:49.058 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.058 18:38:07 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:49.058 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:49.058 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:49.059 Found net devices under 0000:09:00.0: cvl_0_0 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:49.059 Found net devices under 0000:09:00.1: cvl_0_1 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:49.059 18:38:07 
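The discovery pass above keys on PCI vendor:device IDs (Intel E810, 8086:0x159b, bound to the ice driver) and then resolves each matching function to its kernel interface through sysfs, which is how cvl_0_0 and cvl_0_1 are found. A minimal standalone sketch of the same lookup, assuming lspci is available in place of the pci_bus_cache arrays the script actually populates:

# list E810 functions (vendor 8086, device 159b) and their net devices via sysfs
for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdir" ] && echo "Found net device under $pci: ${netdir##*/}"
    done
done
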
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:49.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:49.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:30:49.059 00:30:49.059 --- 10.0.0.2 ping statistics --- 00:30:49.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.059 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:30:49.059 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:49.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:49.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:30:49.317 00:30:49.317 --- 10.0.0.1 ping statistics --- 00:30:49.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.317 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:30:49.317 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:49.317 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0 00:30:49.317 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:49.317 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:49.317 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:49.317 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:49.317 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:49.317 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:49.317 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:49.317 18:38:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:30:49.317 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:49.317 18:38:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:49.317 18:38:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:49.317 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=1412619 00:30:49.317 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:49.317 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 1412619 00:30:49.317 18:38:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 1412619 ']' 00:30:49.317 18:38:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:49.317 18:38:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:49.317 18:38:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:49.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:49.317 18:38:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:49.317 18:38:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:49.317 [2024-10-08 18:38:07.427762] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:49.317 [2024-10-08 18:38:07.428831] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:30:49.317 [2024-10-08 18:38:07.428904] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:49.317 [2024-10-08 18:38:07.491157] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:49.317 [2024-10-08 18:38:07.591572] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
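The two pings above close out the loopback topology that nvmf_tcp_init builds on this host: the target-side E810 port (cvl_0_0) is moved into a private network namespace and given 10.0.0.2/24, while the initiator-side port (cvl_0_1) stays in the root namespace with 10.0.0.1/24, so NVMe/TCP traffic crosses a real link between the two ports. Condensed from the trace (interface and namespace names as found on this machine):

ip netns add cvl_0_0_ns_spdk                                   # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator port
ping -c 1 10.0.0.2                                             # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> root namespace
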
00:30:49.317 [2024-10-08 18:38:07.591629] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:49.317 [2024-10-08 18:38:07.591653] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:49.317 [2024-10-08 18:38:07.591664] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:49.317 [2024-10-08 18:38:07.591674] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:49.317 [2024-10-08 18:38:07.592410] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:30:49.317 [2024-10-08 18:38:07.592416] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:49.678 [2024-10-08 18:38:07.683243] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:49.678 [2024-10-08 18:38:07.683332] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:49.678 [2024-10-08 18:38:07.683530] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:30:49.678 5000+0 records in 00:30:49.678 5000+0 records out 00:30:49.678 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0147133 s, 696 MB/s 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:49.678 AIO0 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:49.678 [2024-10-08 18:38:07.785116] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.678 18:38:07 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:49.678 [2024-10-08 18:38:07.821423] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1412619 0 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1412619 0 idle 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1412619 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1412619 -w 256 00:30:49.678 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:49.935 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1412619 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.30 reactor_0' 00:30:49.935 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1412619 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.30 reactor_0 00:30:49.935 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:49.935 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:49.935 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:49.935 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
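Between the dd/bdev_aio_create step and the listener just added, the whole target configuration is driven over JSON-RPC. rpc_cmd in these scripts forwards to scripts/rpc.py against the target's RPC socket, so spelled out (with the backing-file path shortened to a placeholder) the sequence amounts to roughly:

dd if=/dev/zero of=./aiofile bs=2048 count=5000                  # ~10 MB backing file
scripts/rpc.py bdev_aio_create ./aiofile AIO0 2048               # expose it as bdev AIO0, 2048 B blocks
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -q 256    # TCP transport, flags copied from the trace
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
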
cpu_rate=0 00:30:49.935 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:49.935 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:49.935 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:49.935 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:49.935 18:38:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:30:49.935 18:38:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1412619 1 00:30:49.935 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1412619 1 idle 00:30:49.935 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1412619 00:30:49.935 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:49.935 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:49.935 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:49.935 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:49.935 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:49.935 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:49.936 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:49.936 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:49.936 18:38:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:49.936 18:38:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1412619 -w 256 00:30:49.936 18:38:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:49.936 18:38:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1412623 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1' 00:30:49.936 18:38:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1412623 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1 00:30:49.936 18:38:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:49.936 18:38:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:49.936 18:38:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:49.936 18:38:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:49.936 18:38:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:49.936 18:38:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:49.936 18:38:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:49.936 18:38:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:49.936 18:38:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:30:49.936 18:38:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1412658 00:30:49.936 18:38:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:49.936 18:38:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
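The idle checks above, and the busy checks that follow once perf starts, both come down to sampling one reactor thread's %CPU with top and comparing it against a threshold (65% busy / 30% idle by default; this test overrides BUSY_THRESHOLD to 30). A condensed, hypothetical helper with the same shape, not the exact common.sh code:

# usage: check_reactor <nvmf pid> <reactor index> busy|idle   (thresholds as used in this test)
check_reactor() {
    local pid=$1 idx=$2 want=$3 rate
    for _ in {1..10}; do
        # column 9 of "top -bH" is %CPU for the matching reactor thread
        rate=$(top -bHn 1 -p "$pid" -w 256 | awk -v r="reactor_${idx}" '$0 ~ r {print int($9); exit}')
        rate=${rate:-0}
        [ "$want" = busy ] && [ "$rate" -ge 30 ] && return 0
        [ "$want" = idle ] && [ "$rate" -le 30 ] && return 0
        sleep 1
    done
    return 1
}
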
00:30:49.936 18:38:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:30:49.936 18:38:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1412619 0 00:30:49.936 18:38:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1412619 0 busy 00:30:49.936 18:38:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1412619 00:30:49.936 18:38:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:49.936 18:38:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:30:49.936 18:38:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:30:49.936 18:38:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:49.936 18:38:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:30:49.936 18:38:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:49.936 18:38:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:49.936 18:38:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:49.936 18:38:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1412619 -w 256 00:30:49.936 18:38:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:50.192 18:38:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1412619 root 20 0 128.2g 48384 35328 S 13.3 0.1 0:00.32 reactor_0' 00:30:50.192 18:38:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1412619 root 20 0 128.2g 48384 35328 S 13.3 0.1 0:00.32 reactor_0 00:30:50.192 18:38:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:50.192 18:38:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:50.192 18:38:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=13.3 00:30:50.192 18:38:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=13 00:30:50.192 18:38:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:30:50.192 18:38:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:30:50.192 18:38:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:30:51.123 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:30:51.123 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:51.123 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1412619 -w 256 00:30:51.123 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:51.381 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1412619 root 20 0 128.2g 48768 35328 R 99.9 0.1 0:02.52 reactor_0' 00:30:51.381 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1412619 root 20 0 128.2g 48768 35328 R 99.9 0.1 0:02.52 reactor_0 00:30:51.381 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:51.381 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:51.381 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:30:51.381 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:30:51.381 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:30:51.381 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:30:51.381 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:30:51.381 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:51.381 18:38:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:30:51.381 18:38:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:30:51.381 18:38:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1412619 1 00:30:51.381 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1412619 1 busy 00:30:51.381 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1412619 00:30:51.381 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:51.381 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:30:51.381 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:30:51.381 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:51.381 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:30:51.381 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:51.381 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:51.381 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:51.381 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1412619 -w 256 00:30:51.381 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:51.639 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1412623 root 20 0 128.2g 48768 35328 R 99.9 0.1 0:01.27 reactor_1' 00:30:51.639 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1412623 root 20 0 128.2g 48768 35328 R 99.9 0.1 0:01.27 reactor_1 00:30:51.639 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:51.639 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:51.639 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:30:51.639 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:30:51.639 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:30:51.639 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:30:51.639 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:30:51.639 18:38:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:51.639 18:38:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1412658 00:31:01.605 Initializing NVMe Controllers 00:31:01.605 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:01.605 Controller IO queue size 256, less than required. 00:31:01.605 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:01.605 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:01.605 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:01.605 Initialization complete. Launching workers. 
00:31:01.605 ======================================================== 00:31:01.605 Latency(us) 00:31:01.605 Device Information : IOPS MiB/s Average min max 00:31:01.605 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13004.73 50.80 19699.87 4180.62 28822.31 00:31:01.605 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13798.82 53.90 18565.16 4698.63 22699.80 00:31:01.605 ======================================================== 00:31:01.605 Total : 26803.55 104.70 19115.71 4180.62 28822.31 00:31:01.605 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1412619 0 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1412619 0 idle 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1412619 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1412619 -w 256 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1412619 root 20 0 128.2g 48768 35328 S 6.2 0.1 0:20.26 reactor_0' 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1412619 root 20 0 128.2g 48768 35328 S 6.2 0.1 0:20.26 reactor_0 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1412619 1 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1412619 1 idle 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1412619 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- 
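The load that pushed reactor_0 and reactor_1 to ~99.9% came from spdk_nvme_perf pinned to cores 2-3 (mask 0xC), keeping the initiator off the target's cores 0-1; the "queue size 256, less than required" notice only means a queue depth of 256 cannot all stay in flight on a 256-entry I/O queue, so some requests wait in the driver, as the message itself says. Annotated, with the binary path relative to the SPDK tree:

# qd 256, 4 KiB I/Os, random mixed workload with 30% reads, 10 s run, initiator cores 2-3 only
build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
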
interrupt/common.sh@11 -- # local idx=1 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1412619 -w 256 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1412623 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:09.97 reactor_1' 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1412623 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:09.97 reactor_1 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:01.605 18:38:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:01.605 18:38:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:31:01.605 18:38:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:31:01.605 18:38:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:31:01.605 18:38:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:31:01.605 18:38:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1412619 0 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1412619 0 idle 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1412619 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1412619 -w 256 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1412619 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:20.35 reactor_0' 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1412619 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:20.35 reactor_0 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1412619 1 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1412619 1 idle 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1412619 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1412619 -w 256 00:31:02.982 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:31:03.241 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1412623 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:10.01 reactor_1' 00:31:03.241 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1412623 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:10.01 reactor_1 00:31:03.241 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:31:03.241 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:31:03.241 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:31:03.241 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:31:03.241 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:31:03.241 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:31:03.241 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:31:03.241 18:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:31:03.241 18:38:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:03.241 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:03.241 18:38:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:03.241 18:38:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:31:03.241 18:38:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:31:03.241 18:38:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:03.241 18:38:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:31:03.241 18:38:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:03.499 18:38:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:31:03.500 18:38:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:31:03.500 18:38:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:31:03.500 18:38:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:03.500 18:38:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:31:03.500 18:38:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:03.500 18:38:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:31:03.500 18:38:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:03.500 18:38:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:03.500 rmmod nvme_tcp 00:31:03.500 rmmod nvme_fabrics 00:31:03.500 rmmod nvme_keyring 00:31:03.500 18:38:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:03.500 18:38:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:31:03.500 18:38:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:31:03.500 18:38:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 
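The host-side half of the test just completed uses the kernel NVMe/TCP initiator: connect to the subsystem, wait until a block device carrying the target's serial shows up, then disconnect. Reduced to its essentials, with the host NQN/ID values generated earlier in the trace:

nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 1; done   # wait for the namespace
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
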
1412619 ']' 00:31:03.500 18:38:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 1412619 00:31:03.500 18:38:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 1412619 ']' 00:31:03.500 18:38:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 1412619 00:31:03.500 18:38:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:31:03.500 18:38:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:03.500 18:38:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1412619 00:31:03.500 18:38:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:03.500 18:38:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:03.500 18:38:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1412619' 00:31:03.500 killing process with pid 1412619 00:31:03.500 18:38:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 1412619 00:31:03.500 18:38:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 1412619 00:31:03.758 18:38:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:03.758 18:38:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:03.758 18:38:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:03.758 18:38:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:31:03.758 18:38:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save 00:31:03.758 18:38:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:03.758 18:38:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-restore 00:31:03.758 18:38:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:03.758 18:38:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:03.758 18:38:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:03.758 18:38:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:03.758 18:38:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.665 18:38:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:05.923 00:31:05.924 real 0m18.942s 00:31:05.924 user 0m37.770s 00:31:05.924 sys 0m6.306s 00:31:05.924 18:38:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:05.924 18:38:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:31:05.924 ************************************ 00:31:05.924 END TEST nvmf_interrupt 00:31:05.924 ************************************ 00:31:05.924 00:31:05.924 real 25m1.448s 00:31:05.924 user 58m42.528s 00:31:05.924 sys 6m42.633s 00:31:05.924 18:38:23 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:05.924 18:38:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:05.924 ************************************ 00:31:05.924 END TEST nvmf_tcp 00:31:05.924 ************************************ 00:31:05.924 18:38:24 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:31:05.924 18:38:24 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:05.924 18:38:24 -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:05.924 18:38:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:05.924 18:38:24 -- common/autotest_common.sh@10 -- # set +x 00:31:05.924 ************************************ 00:31:05.924 START TEST spdkcli_nvmf_tcp 00:31:05.924 ************************************ 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:05.924 * Looking for test storage... 00:31:05.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:05.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.924 --rc genhtml_branch_coverage=1 00:31:05.924 --rc genhtml_function_coverage=1 00:31:05.924 --rc genhtml_legend=1 00:31:05.924 --rc geninfo_all_blocks=1 00:31:05.924 --rc geninfo_unexecuted_blocks=1 00:31:05.924 00:31:05.924 ' 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:05.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.924 --rc genhtml_branch_coverage=1 00:31:05.924 --rc genhtml_function_coverage=1 00:31:05.924 --rc genhtml_legend=1 00:31:05.924 --rc geninfo_all_blocks=1 00:31:05.924 --rc geninfo_unexecuted_blocks=1 00:31:05.924 00:31:05.924 ' 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:05.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.924 --rc genhtml_branch_coverage=1 00:31:05.924 --rc genhtml_function_coverage=1 00:31:05.924 --rc genhtml_legend=1 00:31:05.924 --rc geninfo_all_blocks=1 00:31:05.924 --rc geninfo_unexecuted_blocks=1 00:31:05.924 00:31:05.924 ' 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:05.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.924 --rc genhtml_branch_coverage=1 00:31:05.924 --rc genhtml_function_coverage=1 00:31:05.924 --rc genhtml_legend=1 00:31:05.924 --rc geninfo_all_blocks=1 00:31:05.924 --rc geninfo_unexecuted_blocks=1 00:31:05.924 00:31:05.924 ' 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:31:05.924 
18:38:24 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.924 18:38:24 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.925 18:38:24 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.925 18:38:24 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:31:05.925 18:38:24 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.925 18:38:24 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:31:05.925 18:38:24 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:05.925 18:38:24 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:05.925 18:38:24 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:05.925 18:38:24 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:05.925 18:38:24 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:05.925 18:38:24 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:05.925 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:05.925 18:38:24 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:05.925 18:38:24 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:05.925 18:38:24 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:05.925 18:38:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:31:05.925 18:38:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:31:05.925 18:38:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:31:05.925 18:38:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:31:05.925 18:38:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:05.925 18:38:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:05.925 18:38:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:31:05.925 18:38:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1414668 00:31:05.925 18:38:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:31:05.925 18:38:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1414668 00:31:05.925 18:38:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 1414668 ']' 00:31:05.925 18:38:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:05.925 18:38:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:05.925 18:38:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:05.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:05.925 18:38:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:05.925 18:38:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:06.183 [2024-10-08 18:38:24.229796] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:31:06.183 [2024-10-08 18:38:24.229890] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1414668 ] 00:31:06.183 [2024-10-08 18:38:24.289050] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:06.183 [2024-10-08 18:38:24.409310] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:06.183 [2024-10-08 18:38:24.409320] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:06.441 18:38:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:06.441 18:38:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:31:06.441 18:38:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:31:06.441 18:38:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:06.441 18:38:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:06.441 18:38:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:31:06.441 18:38:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:31:06.441 18:38:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:31:06.441 18:38:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:06.441 18:38:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:06.441 18:38:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:31:06.441 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:31:06.441 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:31:06.441 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:31:06.441 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:31:06.441 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:31:06.441 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:31:06.441 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:06.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:31:06.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:31:06.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:06.441 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:06.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:31:06.441 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:06.442 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:06.442 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:31:06.442 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:31:06.442 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:06.442 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:06.442 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:06.442 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:31:06.442 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:31:06.442 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:06.442 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:31:06.442 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:06.442 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:31:06.442 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:31:06.442 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:31:06.442 ' 00:31:08.970 [2024-10-08 18:38:27.159253] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:10.344 [2024-10-08 18:38:28.431890] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:31:12.871 [2024-10-08 18:38:30.819168] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:31:14.768 [2024-10-08 18:38:32.837241] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:31:16.138 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:31:16.138 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:31:16.138 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:31:16.138 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:31:16.138 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:31:16.138 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:31:16.138 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:31:16.138 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:16.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:31:16.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:31:16.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:16.138 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:16.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:31:16.138 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:16.138 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:16.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:31:16.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:16.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:16.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:16.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:16.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:31:16.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:31:16.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:16.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:31:16.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:16.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:31:16.138 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:31:16.138 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:31:16.396 18:38:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:31:16.396 18:38:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:16.396 18:38:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:16.396 18:38:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:31:16.396 18:38:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:16.396 18:38:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:16.396 18:38:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:31:16.396 18:38:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:31:16.961 18:38:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:31:16.961 18:38:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:31:16.961 18:38:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:31:16.961 18:38:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:16.961 18:38:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:16.961 
18:38:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:31:16.961 18:38:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:16.961 18:38:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:16.961 18:38:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:31:16.961 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:31:16.961 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:16.961 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:31:16.961 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:31:16.961 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:31:16.961 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:31:16.961 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:16.961 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:31:16.961 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:31:16.961 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:31:16.961 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:16.961 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:16.961 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:16.961 ' 00:31:22.281 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:22.281 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:22.281 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:22.281 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:22.281 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:22.281 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:22.281 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:22.281 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:22.281 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:22.281 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:22.281 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:31:22.281 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:22.281 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:22.281 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:22.281 18:38:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:22.281 18:38:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:22.281 18:38:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:22.281 
18:38:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1414668 00:31:22.281 18:38:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1414668 ']' 00:31:22.281 18:38:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1414668 00:31:22.281 18:38:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:31:22.281 18:38:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:22.281 18:38:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1414668 00:31:22.281 18:38:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:22.281 18:38:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:22.281 18:38:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1414668' 00:31:22.281 killing process with pid 1414668 00:31:22.281 18:38:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 1414668 00:31:22.281 18:38:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 1414668 00:31:22.539 18:38:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:22.539 18:38:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:22.539 18:38:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1414668 ']' 00:31:22.539 18:38:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1414668 00:31:22.539 18:38:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1414668 ']' 00:31:22.539 18:38:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1414668 00:31:22.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1414668) - No such process 00:31:22.539 18:38:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 1414668 is not found' 00:31:22.539 Process with pid 1414668 is not found 00:31:22.539 18:38:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:22.539 18:38:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:22.539 18:38:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:22.539 00:31:22.539 real 0m16.788s 00:31:22.539 user 0m35.727s 00:31:22.539 sys 0m0.800s 00:31:22.539 18:38:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:22.539 18:38:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:22.539 ************************************ 00:31:22.539 END TEST spdkcli_nvmf_tcp 00:31:22.539 ************************************ 00:31:22.797 18:38:40 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:22.797 18:38:40 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:22.797 18:38:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:22.797 18:38:40 -- common/autotest_common.sh@10 -- # set +x 00:31:22.797 ************************************ 00:31:22.797 START TEST nvmf_identify_passthru 00:31:22.797 ************************************ 00:31:22.797 18:38:40 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:22.797 * Looking for test 
storage... 00:31:22.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:22.797 18:38:40 nvmf_identify_passthru -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:22.797 18:38:40 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lcov --version 00:31:22.797 18:38:40 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:22.797 18:38:40 nvmf_identify_passthru -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:22.797 18:38:40 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:22.797 18:38:40 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:22.797 18:38:40 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:22.797 18:38:40 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:31:22.797 18:38:40 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:31:22.797 18:38:40 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:31:22.797 18:38:40 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:31:22.797 18:38:40 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:31:22.797 18:38:40 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:31:22.797 18:38:40 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:31:22.797 18:38:40 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:22.797 18:38:40 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:31:22.797 18:38:40 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:31:22.797 18:38:40 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:22.797 18:38:40 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:22.797 18:38:40 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:31:22.797 18:38:40 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:31:22.797 18:38:40 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:22.797 18:38:40 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:31:22.797 18:38:40 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:31:22.797 18:38:40 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:31:22.797 18:38:40 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:31:22.797 18:38:40 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:22.797 18:38:40 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:31:22.797 18:38:40 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:31:22.797 18:38:40 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:22.797 18:38:40 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:22.797 18:38:40 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:31:22.797 18:38:40 nvmf_identify_passthru -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:22.797 18:38:40 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:22.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.797 --rc genhtml_branch_coverage=1 00:31:22.797 --rc genhtml_function_coverage=1 00:31:22.797 --rc genhtml_legend=1 00:31:22.797 --rc geninfo_all_blocks=1 00:31:22.798 --rc geninfo_unexecuted_blocks=1 00:31:22.798 00:31:22.798 ' 00:31:22.798 18:38:41 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:22.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.798 --rc genhtml_branch_coverage=1 00:31:22.798 --rc genhtml_function_coverage=1 00:31:22.798 --rc genhtml_legend=1 00:31:22.798 --rc geninfo_all_blocks=1 00:31:22.798 --rc geninfo_unexecuted_blocks=1 00:31:22.798 00:31:22.798 ' 00:31:22.798 18:38:41 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:22.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.798 --rc genhtml_branch_coverage=1 00:31:22.798 --rc genhtml_function_coverage=1 00:31:22.798 --rc genhtml_legend=1 00:31:22.798 --rc geninfo_all_blocks=1 00:31:22.798 --rc geninfo_unexecuted_blocks=1 00:31:22.798 00:31:22.798 ' 00:31:22.798 18:38:41 nvmf_identify_passthru -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:22.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:22.798 --rc genhtml_branch_coverage=1 00:31:22.798 --rc genhtml_function_coverage=1 00:31:22.798 --rc genhtml_legend=1 00:31:22.798 --rc geninfo_all_blocks=1 00:31:22.798 --rc geninfo_unexecuted_blocks=1 00:31:22.798 00:31:22.798 ' 00:31:22.798 18:38:41 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:22.798 18:38:41 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:31:22.798 18:38:41 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:22.798 18:38:41 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:22.798 18:38:41 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:22.798 18:38:41 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.798 18:38:41 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.798 18:38:41 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.798 18:38:41 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:22.798 18:38:41 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:22.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:22.798 18:38:41 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:22.798 18:38:41 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:31:22.798 18:38:41 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:22.798 18:38:41 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:22.798 18:38:41 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:22.798 18:38:41 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.798 18:38:41 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.798 18:38:41 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.798 18:38:41 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:31:22.798 18:38:41 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.798 18:38:41 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.798 18:38:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:22.798 18:38:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:22.798 18:38:41 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:31:22.798 18:38:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:31:25.328 18:38:43 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:25.328 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:25.328 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:25.328 Found net devices under 0000:09:00.0: cvl_0_0 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:25.328 Found net devices under 0000:09:00.1: cvl_0_1 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:25.328 18:38:43 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:25.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:25.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:31:25.328 00:31:25.328 --- 10.0.0.2 ping statistics --- 00:31:25.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:25.328 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:25.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:25.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:31:25.328 00:31:25.328 --- 10.0.0.1 ping statistics --- 00:31:25.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:25.328 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:25.328 18:38:43 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:25.328 18:38:43 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:31:25.328 18:38:43 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:25.328 18:38:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:25.328 18:38:43 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:31:25.328 18:38:43 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:31:25.329 18:38:43 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:31:25.329 18:38:43 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:31:25.329 18:38:43 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:31:25.329 18:38:43 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:31:25.329 18:38:43 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:31:25.329 18:38:43 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:25.329 18:38:43 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:25.329 18:38:43 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:31:25.329 18:38:43 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:31:25.329 18:38:43 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:0b:00.0 00:31:25.329 18:38:43 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:0b:00.0 00:31:25.329 18:38:43 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:0b:00.0 00:31:25.329 18:38:43 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:0b:00.0 ']' 00:31:25.329 18:38:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:31:25.329 18:38:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:31:25.329 18:38:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:31:29.508 18:38:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ72430F4Q1P0FGN 00:31:29.508 18:38:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:31:29.508 18:38:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:31:29.508 18:38:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:31:33.688 18:38:51 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:31:33.688 18:38:51 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:31:33.688 18:38:51 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:33.688 18:38:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:33.688 18:38:51 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:31:33.688 18:38:51 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:33.688 18:38:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:33.688 18:38:51 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1419302 00:31:33.688 18:38:51 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:33.688 18:38:51 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:33.688 18:38:51 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1419302 00:31:33.688 18:38:51 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 1419302 ']' 00:31:33.688 18:38:51 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:33.688 18:38:51 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:33.688 18:38:51 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:33.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:33.688 18:38:51 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:33.688 18:38:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:33.688 [2024-10-08 18:38:51.655438] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:31:33.688 [2024-10-08 18:38:51.655527] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:33.688 [2024-10-08 18:38:51.722606] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:33.688 [2024-10-08 18:38:51.834411] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:33.688 [2024-10-08 18:38:51.834465] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:33.688 [2024-10-08 18:38:51.834479] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:33.688 [2024-10-08 18:38:51.834491] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:33.688 [2024-10-08 18:38:51.834501] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:33.688 [2024-10-08 18:38:51.836135] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:31:33.688 [2024-10-08 18:38:51.836192] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:31:33.688 [2024-10-08 18:38:51.836215] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:31:33.688 [2024-10-08 18:38:51.836219] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:33.688 18:38:51 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:33.688 18:38:51 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:31:33.688 18:38:51 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:31:33.688 18:38:51 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.688 18:38:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:33.688 INFO: Log level set to 20 00:31:33.688 INFO: Requests: 00:31:33.688 { 00:31:33.688 "jsonrpc": "2.0", 00:31:33.688 "method": "nvmf_set_config", 00:31:33.688 "id": 1, 00:31:33.688 "params": { 00:31:33.688 "admin_cmd_passthru": { 00:31:33.688 "identify_ctrlr": true 00:31:33.688 } 00:31:33.688 } 00:31:33.688 } 00:31:33.688 00:31:33.688 INFO: response: 00:31:33.688 { 00:31:33.688 "jsonrpc": "2.0", 00:31:33.688 "id": 1, 00:31:33.688 "result": true 00:31:33.688 } 00:31:33.688 00:31:33.688 18:38:51 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.688 18:38:51 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:31:33.688 18:38:51 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.688 18:38:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:33.688 INFO: Setting log level to 20 00:31:33.688 INFO: Setting log level to 20 00:31:33.688 INFO: Log level set to 20 00:31:33.688 INFO: Log level set to 20 00:31:33.688 INFO: Requests: 00:31:33.688 { 00:31:33.688 "jsonrpc": "2.0", 00:31:33.688 "method": "framework_start_init", 00:31:33.688 "id": 1 00:31:33.688 } 00:31:33.688 00:31:33.688 INFO: Requests: 00:31:33.688 { 00:31:33.688 "jsonrpc": "2.0", 00:31:33.688 "method": "framework_start_init", 00:31:33.688 "id": 1 00:31:33.688 } 00:31:33.688 00:31:33.946 [2024-10-08 18:38:52.006398] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:31:33.946 INFO: response: 00:31:33.946 { 00:31:33.946 "jsonrpc": "2.0", 00:31:33.946 "id": 1, 00:31:33.946 "result": true 00:31:33.946 } 00:31:33.946 00:31:33.946 INFO: response: 00:31:33.946 { 00:31:33.946 "jsonrpc": "2.0", 00:31:33.946 "id": 1, 00:31:33.946 "result": true 00:31:33.946 } 00:31:33.946 00:31:33.946 18:38:52 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.946 18:38:52 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:33.946 18:38:52 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.946 18:38:52 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:31:33.946 INFO: Setting log level to 40 00:31:33.946 INFO: Setting log level to 40 00:31:33.946 INFO: Setting log level to 40 00:31:33.946 [2024-10-08 18:38:52.016511] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:33.946 18:38:52 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:33.946 18:38:52 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:31:33.946 18:38:52 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:33.946 18:38:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:33.946 18:38:52 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 00:31:33.946 18:38:52 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:33.946 18:38:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:37.220 Nvme0n1 00:31:37.220 18:38:54 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.220 18:38:54 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:31:37.220 18:38:54 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.220 18:38:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:37.220 18:38:54 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.220 18:38:54 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:37.220 18:38:54 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.220 18:38:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:37.220 18:38:54 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.220 18:38:54 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:37.220 18:38:54 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.220 18:38:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:37.220 [2024-10-08 18:38:54.904717] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:37.220 18:38:54 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.220 18:38:54 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:31:37.220 18:38:54 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.220 18:38:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:37.220 [ 00:31:37.220 { 00:31:37.220 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:37.220 "subtype": "Discovery", 00:31:37.220 "listen_addresses": [], 00:31:37.220 "allow_any_host": true, 00:31:37.220 "hosts": [] 00:31:37.220 }, 00:31:37.220 { 00:31:37.220 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:37.220 "subtype": "NVMe", 00:31:37.220 "listen_addresses": [ 00:31:37.221 { 00:31:37.221 "trtype": "TCP", 00:31:37.221 "adrfam": "IPv4", 00:31:37.221 "traddr": "10.0.0.2", 00:31:37.221 "trsvcid": "4420" 00:31:37.221 } 00:31:37.221 ], 00:31:37.221 "allow_any_host": true, 00:31:37.221 "hosts": [], 00:31:37.221 "serial_number": 
"SPDK00000000000001", 00:31:37.221 "model_number": "SPDK bdev Controller", 00:31:37.221 "max_namespaces": 1, 00:31:37.221 "min_cntlid": 1, 00:31:37.221 "max_cntlid": 65519, 00:31:37.221 "namespaces": [ 00:31:37.221 { 00:31:37.221 "nsid": 1, 00:31:37.221 "bdev_name": "Nvme0n1", 00:31:37.221 "name": "Nvme0n1", 00:31:37.221 "nguid": "D006CE27BC384CC69984E145CB3D1A40", 00:31:37.221 "uuid": "d006ce27-bc38-4cc6-9984-e145cb3d1a40" 00:31:37.221 } 00:31:37.221 ] 00:31:37.221 } 00:31:37.221 ] 00:31:37.221 18:38:54 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.221 18:38:54 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:37.221 18:38:54 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:31:37.221 18:38:54 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:31:37.221 18:38:55 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F4Q1P0FGN 00:31:37.221 18:38:55 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:37.221 18:38:55 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:31:37.221 18:38:55 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:31:37.221 18:38:55 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:31:37.221 18:38:55 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' 00:31:37.221 18:38:55 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:31:37.221 18:38:55 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:37.221 18:38:55 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:37.221 18:38:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:37.221 18:38:55 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:37.221 18:38:55 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:31:37.221 18:38:55 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:31:37.221 18:38:55 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:37.221 18:38:55 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:31:37.221 18:38:55 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:37.221 18:38:55 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:31:37.221 18:38:55 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:37.221 18:38:55 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:37.221 rmmod nvme_tcp 00:31:37.221 rmmod nvme_fabrics 00:31:37.221 rmmod nvme_keyring 00:31:37.221 18:38:55 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:37.221 18:38:55 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:31:37.221 18:38:55 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:31:37.221 18:38:55 nvmf_identify_passthru -- nvmf/common.sh@515 -- # 
'[' -n 1419302 ']' 00:31:37.221 18:38:55 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 1419302 00:31:37.221 18:38:55 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 1419302 ']' 00:31:37.221 18:38:55 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 1419302 00:31:37.221 18:38:55 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:31:37.221 18:38:55 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:37.221 18:38:55 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1419302 00:31:37.221 18:38:55 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:37.221 18:38:55 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:37.221 18:38:55 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1419302' 00:31:37.221 killing process with pid 1419302 00:31:37.221 18:38:55 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 1419302 00:31:37.221 18:38:55 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 1419302 00:31:39.118 18:38:56 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:39.118 18:38:56 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:39.118 18:38:56 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:39.118 18:38:56 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:31:39.118 18:38:56 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-save 00:31:39.118 18:38:56 nvmf_identify_passthru -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:39.118 18:38:56 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:31:39.118 18:38:56 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:39.118 18:38:56 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:39.118 18:38:56 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:39.118 18:38:56 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:39.118 18:38:56 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:41.019 18:38:58 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:41.019 00:31:41.019 real 0m18.076s 00:31:41.019 user 0m25.755s 00:31:41.019 sys 0m3.229s 00:31:41.019 18:38:58 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:41.019 18:38:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:41.019 ************************************ 00:31:41.019 END TEST nvmf_identify_passthru 00:31:41.019 ************************************ 00:31:41.019 18:38:58 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:41.019 18:38:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:41.019 18:38:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:41.019 18:38:58 -- common/autotest_common.sh@10 -- # set +x 00:31:41.019 ************************************ 00:31:41.019 START TEST nvmf_dif 00:31:41.019 ************************************ 00:31:41.019 18:38:58 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:41.019 * Looking for test 
storage... 00:31:41.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:41.019 18:38:59 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:31:41.019 18:38:59 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:31:41.019 18:38:59 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:31:41.019 18:38:59 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:31:41.019 18:38:59 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:41.019 18:38:59 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:41.019 18:38:59 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:41.019 18:38:59 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:31:41.019 18:38:59 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:31:41.019 18:38:59 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:31:41.019 18:38:59 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:31:41.019 18:38:59 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:31:41.019 18:38:59 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:31:41.019 18:38:59 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:31:41.019 18:38:59 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:41.019 18:38:59 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:31:41.019 18:38:59 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:31:41.019 18:38:59 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:41.019 18:38:59 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:41.019 18:38:59 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:31:41.019 18:38:59 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:31:41.019 18:38:59 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:41.019 18:38:59 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:31:41.019 18:38:59 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:31:41.019 18:38:59 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:31:41.019 18:38:59 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:31:41.019 18:38:59 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:41.019 18:38:59 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:31:41.019 18:38:59 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:31:41.019 18:38:59 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:41.019 18:38:59 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:41.019 18:38:59 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:31:41.019 18:38:59 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:41.019 18:38:59 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:31:41.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.019 --rc genhtml_branch_coverage=1 00:31:41.019 --rc genhtml_function_coverage=1 00:31:41.019 --rc genhtml_legend=1 00:31:41.019 --rc geninfo_all_blocks=1 00:31:41.019 --rc geninfo_unexecuted_blocks=1 00:31:41.019 00:31:41.019 ' 00:31:41.019 18:38:59 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:31:41.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.019 --rc genhtml_branch_coverage=1 00:31:41.019 --rc genhtml_function_coverage=1 00:31:41.019 --rc genhtml_legend=1 00:31:41.019 --rc geninfo_all_blocks=1 00:31:41.019 --rc geninfo_unexecuted_blocks=1 00:31:41.019 00:31:41.019 ' 00:31:41.019 18:38:59 nvmf_dif -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:31:41.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.019 --rc genhtml_branch_coverage=1 00:31:41.019 --rc genhtml_function_coverage=1 00:31:41.019 --rc genhtml_legend=1 00:31:41.019 --rc geninfo_all_blocks=1 00:31:41.019 --rc geninfo_unexecuted_blocks=1 00:31:41.019 00:31:41.019 ' 00:31:41.019 18:38:59 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:31:41.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:41.019 --rc genhtml_branch_coverage=1 00:31:41.019 --rc genhtml_function_coverage=1 00:31:41.019 --rc genhtml_legend=1 00:31:41.019 --rc geninfo_all_blocks=1 00:31:41.019 --rc geninfo_unexecuted_blocks=1 00:31:41.019 00:31:41.019 ' 00:31:41.019 18:38:59 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:41.019 18:38:59 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:31:41.019 18:38:59 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:41.019 18:38:59 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:41.019 18:38:59 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:41.019 18:38:59 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:41.019 18:38:59 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:41.019 18:38:59 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:41.019 18:38:59 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:41.019 18:38:59 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:41.019 18:38:59 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:41.019 18:38:59 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:41.019 18:38:59 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:41.019 18:38:59 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:41.019 18:38:59 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:41.019 18:38:59 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:41.019 18:38:59 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:41.019 18:38:59 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:41.019 18:38:59 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:41.019 18:38:59 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:31:41.019 18:38:59 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:41.019 18:38:59 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:41.019 18:38:59 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:41.019 18:38:59 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.019 18:38:59 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.019 18:38:59 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.019 18:38:59 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:31:41.019 18:38:59 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:41.019 18:38:59 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:31:41.019 18:38:59 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:41.019 18:38:59 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:41.019 18:38:59 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:41.019 18:38:59 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:41.019 18:38:59 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:41.019 18:38:59 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:41.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:41.020 18:38:59 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:41.020 18:38:59 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:41.020 18:38:59 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:41.020 18:38:59 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:31:41.020 18:38:59 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:31:41.020 18:38:59 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:41.020 18:38:59 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:31:41.020 18:38:59 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:31:41.020 18:38:59 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:41.020 18:38:59 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:41.020 18:38:59 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:41.020 18:38:59 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:41.020 18:38:59 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:41.020 18:38:59 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:41.020 18:38:59 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:41.020 18:38:59 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:41.020 18:38:59 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:41.020 18:38:59 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:41.020 18:38:59 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:31:41.020 18:38:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:42.917 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:42.917 
18:39:01 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:42.917 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:42.917 18:39:01 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:42.918 Found net devices under 0000:09:00.0: cvl_0_0 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:42.918 Found net devices under 0000:09:00.1: cvl_0_1 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:42.918 18:39:01 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:43.175 18:39:01 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:43.175 18:39:01 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:43.175 18:39:01 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:43.175 18:39:01 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:43.175 18:39:01 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:43.175 18:39:01 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:43.175 18:39:01 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:43.175 18:39:01 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:43.175 18:39:01 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:43.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:43.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:31:43.175 00:31:43.175 --- 10.0.0.2 ping statistics --- 00:31:43.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.175 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:31:43.175 18:39:01 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:43.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:43.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:31:43.175 00:31:43.175 --- 10.0.0.1 ping statistics --- 00:31:43.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.175 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:31:43.175 18:39:01 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:43.175 18:39:01 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:31:43.175 18:39:01 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:31:43.175 18:39:01 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:44.108 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:44.108 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:44.108 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:44.108 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:31:44.108 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:44.108 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:44.108 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:44.108 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:44.108 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:44.108 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:31:44.108 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:44.108 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:44.365 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:31:44.365 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:44.365 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:44.365 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:44.365 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:44.365 18:39:02 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:44.365 18:39:02 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:44.365 18:39:02 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:44.365 18:39:02 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:44.365 18:39:02 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:44.365 18:39:02 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:44.365 18:39:02 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:44.365 18:39:02 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:31:44.365 18:39:02 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:44.365 18:39:02 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:44.365 18:39:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:44.365 18:39:02 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=1422687 00:31:44.365 18:39:02 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:44.365 18:39:02 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 1422687 00:31:44.365 18:39:02 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 1422687 ']' 00:31:44.365 18:39:02 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:44.365 18:39:02 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:44.365 18:39:02 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:31:44.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:44.365 18:39:02 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:44.365 18:39:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:44.623 [2024-10-08 18:39:02.657057] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:31:44.623 [2024-10-08 18:39:02.657127] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:44.623 [2024-10-08 18:39:02.720405] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:44.623 [2024-10-08 18:39:02.827437] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:44.623 [2024-10-08 18:39:02.827488] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:44.623 [2024-10-08 18:39:02.827502] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:44.623 [2024-10-08 18:39:02.827514] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:44.623 [2024-10-08 18:39:02.827524] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:44.623 [2024-10-08 18:39:02.828077] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:44.881 18:39:02 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:44.881 18:39:02 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:31:44.881 18:39:02 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:44.881 18:39:02 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:44.881 18:39:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:44.881 18:39:02 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:44.881 18:39:02 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:31:44.881 18:39:02 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:44.881 18:39:02 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.881 18:39:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:44.881 [2024-10-08 18:39:02.972671] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:44.881 18:39:02 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.881 18:39:02 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:44.881 18:39:02 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:44.881 18:39:02 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:44.881 18:39:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:44.881 ************************************ 00:31:44.881 START TEST fio_dif_1_default 00:31:44.881 ************************************ 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:44.881 bdev_null0 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:44.881 [2024-10-08 18:39:03.028978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:44.881 { 00:31:44.881 "params": { 00:31:44.881 "name": "Nvme$subsystem", 00:31:44.881 "trtype": "$TEST_TRANSPORT", 00:31:44.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:44.881 "adrfam": "ipv4", 00:31:44.881 "trsvcid": "$NVMF_PORT", 00:31:44.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:44.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:44.881 "hdgst": ${hdgst:-false}, 00:31:44.881 "ddgst": ${ddgst:-false} 00:31:44.881 }, 00:31:44.881 "method": "bdev_nvme_attach_controller" 00:31:44.881 } 00:31:44.881 EOF 00:31:44.881 )") 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:31:44.881 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:44.882 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:31:44.882 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:44.882 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:44.882 18:39:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:31:44.882 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:44.882 18:39:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:31:44.882 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:31:44.882 18:39:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:31:44.882 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:44.882 18:39:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
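The trace above assembles the bdev JSON for fio on an anonymous file descriptor (gen_nvmf_target_json) and preloads the SPDK bdev ioengine before launching fio against the exported null bdev. A stand-alone sketch of the same idea follows; it is not part of the test run, and everything in it is a placeholder or assumption: $SPDK_DIR stands for a built SPDK tree with the fio plugin, fio is assumed at /usr/src/fio/fio, the config file path and hostnqn are illustrative, and the nqn.2016-06.io.spdk:cnode0 listener on 10.0.0.2:4420 created earlier in this log is assumed to still be up.

    # Sketch only -- reproduces the shape of the run below by hand, under the
    # assumptions stated above (paths and names are placeholders, not from the log).
    cat > /tmp/nvme_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0"
              }
            }
          ]
        }
      ]
    }
    EOF
    # Preload the SPDK bdev external ioengine and point fio at the bdev it attaches
    # (controller "Nvme0", namespace 1 -> bdev "Nvme0n1"), mirroring the job
    # parameters seen in the output below (randread, 4k, iodepth 4, ~10s).
    LD_PRELOAD=$SPDK_DIR/build/fio/spdk_bdev /usr/src/fio/fio \
      --name=filename0 --filename=Nvme0n1 --thread \
      --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvme_bdev.json \
      --rw=randread --bs=4096 --iodepth=4 --time_based --runtime=10

In the test itself the JSON config and the fio job description are passed on /dev/fd/62 and /dev/fd/61 instead of on-disk files; the command-line form above is only a simplified equivalent.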
00:31:44.882 18:39:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:31:44.882 18:39:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:44.882 "params": { 00:31:44.882 "name": "Nvme0", 00:31:44.882 "trtype": "tcp", 00:31:44.882 "traddr": "10.0.0.2", 00:31:44.882 "adrfam": "ipv4", 00:31:44.882 "trsvcid": "4420", 00:31:44.882 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:44.882 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:44.882 "hdgst": false, 00:31:44.882 "ddgst": false 00:31:44.882 }, 00:31:44.882 "method": "bdev_nvme_attach_controller" 00:31:44.882 }' 00:31:44.882 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:44.882 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:44.882 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:44.882 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:44.882 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:44.882 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:44.882 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:44.882 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:44.882 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:44.882 18:39:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:45.140 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:45.140 fio-3.35 00:31:45.140 Starting 1 thread 00:31:57.336 00:31:57.336 filename0: (groupid=0, jobs=1): err= 0: pid=1422914: Tue Oct 8 18:39:13 2024 00:31:57.336 read: IOPS=255, BW=1024KiB/s (1048kB/s)(10.0MiB/10020msec) 00:31:57.336 slat (nsec): min=4072, max=61487, avg=9637.04, stdev=2754.81 00:31:57.336 clat (usec): min=522, max=42397, avg=15601.62, stdev=19641.05 00:31:57.336 lat (usec): min=530, max=42411, avg=15611.26, stdev=19641.04 00:31:57.336 clat percentiles (usec): 00:31:57.336 | 1.00th=[ 545], 5.00th=[ 562], 10.00th=[ 570], 20.00th=[ 578], 00:31:57.336 | 30.00th=[ 586], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 660], 00:31:57.336 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:57.336 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:57.336 | 99.99th=[42206] 00:31:57.336 bw ( KiB/s): min= 768, max= 2368, per=100.00%, avg=1024.00, stdev=367.36, samples=20 00:31:57.336 iops : min= 192, max= 592, avg=256.00, stdev=91.84, samples=20 00:31:57.336 lat (usec) : 750=63.03% 00:31:57.336 lat (msec) : 10=0.16%, 50=36.82% 00:31:57.336 cpu : usr=91.10%, sys=8.59%, ctx=14, majf=0, minf=245 00:31:57.336 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:57.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.336 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.336 issued rwts: total=2564,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:57.336 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:57.336 
00:31:57.336 Run status group 0 (all jobs): 00:31:57.336 READ: bw=1024KiB/s (1048kB/s), 1024KiB/s-1024KiB/s (1048kB/s-1048kB/s), io=10.0MiB (10.5MB), run=10020-10020msec 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.336 00:31:57.336 real 0m11.131s 00:31:57.336 user 0m10.264s 00:31:57.336 sys 0m1.132s 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:57.336 ************************************ 00:31:57.336 END TEST fio_dif_1_default 00:31:57.336 ************************************ 00:31:57.336 18:39:14 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:57.336 18:39:14 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:57.336 18:39:14 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:57.336 18:39:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:57.336 ************************************ 00:31:57.336 START TEST fio_dif_1_multi_subsystems 00:31:57.336 ************************************ 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:57.336 bdev_null0 00:31:57.336 18:39:14 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:57.336 [2024-10-08 18:39:14.216451] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:57.336 bdev_null1 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:57.336 { 00:31:57.336 "params": { 00:31:57.336 "name": "Nvme$subsystem", 00:31:57.336 "trtype": "$TEST_TRANSPORT", 00:31:57.336 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:57.336 "adrfam": "ipv4", 00:31:57.336 "trsvcid": "$NVMF_PORT", 00:31:57.336 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:57.336 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:57.336 "hdgst": ${hdgst:-false}, 00:31:57.336 "ddgst": ${ddgst:-false} 00:31:57.336 }, 00:31:57.336 "method": "bdev_nvme_attach_controller" 00:31:57.336 } 00:31:57.336 EOF 00:31:57.336 )") 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:31:57.336 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:31:57.337 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:31:57.337 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:57.337 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:57.337 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:31:57.337 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:57.337 
18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:31:57.337 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:57.337 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:31:57.337 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:57.337 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:31:57.337 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:57.337 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:57.337 { 00:31:57.337 "params": { 00:31:57.337 "name": "Nvme$subsystem", 00:31:57.337 "trtype": "$TEST_TRANSPORT", 00:31:57.337 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:57.337 "adrfam": "ipv4", 00:31:57.337 "trsvcid": "$NVMF_PORT", 00:31:57.337 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:57.337 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:57.337 "hdgst": ${hdgst:-false}, 00:31:57.337 "ddgst": ${ddgst:-false} 00:31:57.337 }, 00:31:57.337 "method": "bdev_nvme_attach_controller" 00:31:57.337 } 00:31:57.337 EOF 00:31:57.337 )") 00:31:57.337 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:31:57.337 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:31:57.337 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:57.337 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 00:31:57.337 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:31:57.337 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:57.337 "params": { 00:31:57.337 "name": "Nvme0", 00:31:57.337 "trtype": "tcp", 00:31:57.337 "traddr": "10.0.0.2", 00:31:57.337 "adrfam": "ipv4", 00:31:57.337 "trsvcid": "4420", 00:31:57.337 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:57.337 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:57.337 "hdgst": false, 00:31:57.337 "ddgst": false 00:31:57.337 }, 00:31:57.337 "method": "bdev_nvme_attach_controller" 00:31:57.337 },{ 00:31:57.337 "params": { 00:31:57.337 "name": "Nvme1", 00:31:57.337 "trtype": "tcp", 00:31:57.337 "traddr": "10.0.0.2", 00:31:57.337 "adrfam": "ipv4", 00:31:57.337 "trsvcid": "4420", 00:31:57.337 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:57.337 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:57.337 "hdgst": false, 00:31:57.337 "ddgst": false 00:31:57.337 }, 00:31:57.337 "method": "bdev_nvme_attach_controller" 00:31:57.337 }' 00:31:57.337 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:57.337 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:57.337 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:57.337 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:57.337 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:57.337 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:57.337 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 
-- # asan_lib= 00:31:57.337 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:57.337 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:57.337 18:39:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:57.337 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:57.337 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:57.337 fio-3.35 00:31:57.337 Starting 2 threads 00:32:07.299 00:32:07.299 filename0: (groupid=0, jobs=1): err= 0: pid=1424821: Tue Oct 8 18:39:25 2024 00:32:07.299 read: IOPS=98, BW=393KiB/s (402kB/s)(3936KiB/10014msec) 00:32:07.299 slat (nsec): min=6938, max=16972, avg=9812.06, stdev=2397.00 00:32:07.299 clat (usec): min=577, max=44049, avg=40673.39, stdev=3637.05 00:32:07.299 lat (usec): min=586, max=44063, avg=40683.21, stdev=3636.99 00:32:07.299 clat percentiles (usec): 00:32:07.299 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:07.299 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:07.299 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:07.299 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:32:07.299 | 99.99th=[44303] 00:32:07.299 bw ( KiB/s): min= 384, max= 448, per=33.21%, avg=392.00, stdev=17.60, samples=20 00:32:07.299 iops : min= 96, max= 112, avg=98.00, stdev= 4.40, samples=20 00:32:07.299 lat (usec) : 750=0.81% 00:32:07.299 lat (msec) : 50=99.19% 00:32:07.299 cpu : usr=95.06%, sys=4.65%, ctx=14, majf=0, minf=66 00:32:07.299 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:07.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.299 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.299 issued rwts: total=984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:07.299 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:07.299 filename1: (groupid=0, jobs=1): err= 0: pid=1424822: Tue Oct 8 18:39:25 2024 00:32:07.299 read: IOPS=196, BW=787KiB/s (806kB/s)(7888KiB/10017msec) 00:32:07.299 slat (nsec): min=6426, max=30383, avg=9651.61, stdev=2347.71 00:32:07.299 clat (usec): min=524, max=42330, avg=20286.89, stdev=20345.67 00:32:07.299 lat (usec): min=533, max=42343, avg=20296.54, stdev=20345.47 00:32:07.299 clat percentiles (usec): 00:32:07.299 | 1.00th=[ 562], 5.00th=[ 578], 10.00th=[ 586], 20.00th=[ 603], 00:32:07.299 | 30.00th=[ 619], 40.00th=[ 644], 50.00th=[ 865], 60.00th=[41157], 00:32:07.299 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:07.299 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:07.299 | 99.99th=[42206] 00:32:07.299 bw ( KiB/s): min= 736, max= 832, per=66.67%, avg=787.20, stdev=31.83, samples=20 00:32:07.299 iops : min= 184, max= 208, avg=196.80, stdev= 7.96, samples=20 00:32:07.299 lat (usec) : 750=49.59%, 1000=1.93% 00:32:07.299 lat (msec) : 4=0.20%, 50=48.28% 00:32:07.299 cpu : usr=95.56%, sys=4.13%, ctx=14, majf=0, minf=163 00:32:07.299 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:07.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:32:07.299 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.299 issued rwts: total=1972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:07.299 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:07.299 00:32:07.299 Run status group 0 (all jobs): 00:32:07.299 READ: bw=1180KiB/s (1209kB/s), 393KiB/s-787KiB/s (402kB/s-806kB/s), io=11.5MiB (12.1MB), run=10014-10017msec 00:32:07.299 18:39:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:32:07.299 18:39:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:32:07.299 18:39:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:07.299 18:39:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:07.299 18:39:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:32:07.299 18:39:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:07.299 18:39:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.299 18:39:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:07.299 18:39:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.299 18:39:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:07.299 18:39:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.299 18:39:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:07.299 18:39:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.299 18:39:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:32:07.299 18:39:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:07.299 18:39:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:32:07.299 18:39:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:07.299 18:39:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.299 18:39:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:07.299 18:39:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.299 18:39:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:07.299 18:39:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.299 18:39:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:07.299 18:39:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.299 00:32:07.299 real 0m11.344s 00:32:07.299 user 0m20.444s 00:32:07.299 sys 0m1.180s 00:32:07.299 18:39:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:07.299 18:39:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:32:07.299 ************************************ 00:32:07.299 END TEST fio_dif_1_multi_subsystems 00:32:07.299 ************************************ 00:32:07.299 18:39:25 nvmf_dif -- target/dif.sh@143 -- # 
run_test fio_dif_rand_params fio_dif_rand_params 00:32:07.299 18:39:25 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:07.299 18:39:25 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:07.299 18:39:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:07.299 ************************************ 00:32:07.299 START TEST fio_dif_rand_params 00:32:07.299 ************************************ 00:32:07.299 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:32:07.299 18:39:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:32:07.299 18:39:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:32:07.299 18:39:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:32:07.299 18:39:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:32:07.299 18:39:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:32:07.299 18:39:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:32:07.299 18:39:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:32:07.300 18:39:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:32:07.300 18:39:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:07.300 18:39:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:07.300 18:39:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:07.300 18:39:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:07.300 18:39:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:07.300 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.300 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:07.300 bdev_null0 00:32:07.300 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.300 18:39:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:07.300 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.300 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:07.557 [2024-10-08 18:39:25.601791] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
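The create_subsystem 0 sequence traced above provisions the whole target side for this case: a 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3, wrapped in an NVMe-oF subsystem listening on NVMe/TCP 10.0.0.2:4420. rpc_cmd issues the same RPCs that scripts/rpc.py exposes, so a standalone sketch of this step, assuming the target's default RPC socket at /var/tmp/spdk.sock, would be:

# create the DIF-protected null bdev (name, size in MiB, block size), as in the trace
scripts/rpc.py -s /var/tmp/spdk.sock bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
# expose it over NVMe/TCP on 10.0.0.2:4420
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420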
00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:07.557 { 00:32:07.557 "params": { 00:32:07.557 "name": "Nvme$subsystem", 00:32:07.557 "trtype": "$TEST_TRANSPORT", 00:32:07.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:07.557 "adrfam": "ipv4", 00:32:07.557 "trsvcid": "$NVMF_PORT", 00:32:07.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:07.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:07.557 "hdgst": ${hdgst:-false}, 00:32:07.557 "ddgst": ${ddgst:-false} 00:32:07.557 }, 00:32:07.557 "method": "bdev_nvme_attach_controller" 00:32:07.557 } 00:32:07.557 EOF 00:32:07.557 )") 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # 
jq . 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:07.557 "params": { 00:32:07.557 "name": "Nvme0", 00:32:07.557 "trtype": "tcp", 00:32:07.557 "traddr": "10.0.0.2", 00:32:07.557 "adrfam": "ipv4", 00:32:07.557 "trsvcid": "4420", 00:32:07.557 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:07.557 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:07.557 "hdgst": false, 00:32:07.557 "ddgst": false 00:32:07.557 }, 00:32:07.557 "method": "bdev_nvme_attach_controller" 00:32:07.557 }' 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:07.557 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:07.558 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:07.558 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:07.558 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:07.558 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:07.558 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:07.558 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:07.558 18:39:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:07.815 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:07.815 ... 
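The fio banner above comes from the job file that gen_fio_conf (target/dif.sh@82) hands to fio on /dev/fd/61, next to the bdev JSON on /dev/fd/62; the trace never echoes the job file itself. A minimal stand-in consistent with the banner and with the parameters set at target/dif.sh@103 (randread, bs=128k, numjobs=3, iodepth=3, runtime=5) is sketched below; the filename, thread and time_based lines are assumptions, not taken from the trace.

# hypothetical reconstruction of the generated job file, not the literal gen_fio_conf output
cat <<FIO > /tmp/fio_dif_rand_params.job
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1

[filename0]
# assumes the controller named Nvme0 in the JSON config exposes its namespace as bdev Nvme0n1
filename=Nvme0n1
FIO

It would then be launched the same way the trace does, with the plugin preloaded: LD_PRELOAD=.../spdk/build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf <bdev_json> /tmp/fio_dif_rand_params.job.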
00:32:07.815 fio-3.35 00:32:07.815 Starting 3 threads 00:32:14.361 00:32:14.361 filename0: (groupid=0, jobs=1): err= 0: pid=1426214: Tue Oct 8 18:39:31 2024 00:32:14.361 read: IOPS=264, BW=33.1MiB/s (34.7MB/s)(167MiB/5046msec) 00:32:14.361 slat (nsec): min=4548, max=63359, avg=15064.32, stdev=3474.86 00:32:14.361 clat (usec): min=4842, max=54245, avg=11270.65, stdev=4950.17 00:32:14.361 lat (usec): min=4851, max=54263, avg=11285.72, stdev=4950.15 00:32:14.361 clat percentiles (usec): 00:32:14.361 | 1.00th=[ 6849], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:32:14.361 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10683], 60.00th=[10945], 00:32:14.361 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11994], 95.00th=[12649], 00:32:14.361 | 99.00th=[49021], 99.50th=[51119], 99.90th=[53216], 99.95th=[54264], 00:32:14.361 | 99.99th=[54264] 00:32:14.361 bw ( KiB/s): min=26624, max=36864, per=38.63%, avg=34176.00, stdev=3209.37, samples=10 00:32:14.361 iops : min= 208, max= 288, avg=267.00, stdev=25.07, samples=10 00:32:14.361 lat (msec) : 10=22.74%, 20=75.77%, 50=0.52%, 100=0.97% 00:32:14.361 cpu : usr=85.55%, sys=9.36%, ctx=264, majf=0, minf=126 00:32:14.361 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:14.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.361 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.361 issued rwts: total=1337,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:14.361 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:14.361 filename0: (groupid=0, jobs=1): err= 0: pid=1426215: Tue Oct 8 18:39:31 2024 00:32:14.361 read: IOPS=216, BW=27.0MiB/s (28.3MB/s)(136MiB/5044msec) 00:32:14.361 slat (nsec): min=4149, max=29166, avg=14628.40, stdev=1796.53 00:32:14.361 clat (usec): min=4855, max=50151, avg=13823.19, stdev=4085.41 00:32:14.361 lat (usec): min=4867, max=50178, avg=13837.82, stdev=4085.48 00:32:14.361 clat percentiles (usec): 00:32:14.361 | 1.00th=[ 4948], 5.00th=[ 8717], 10.00th=[10814], 20.00th=[11731], 00:32:14.361 | 30.00th=[12387], 40.00th=[13435], 50.00th=[14091], 60.00th=[14484], 00:32:14.361 | 70.00th=[14877], 80.00th=[15270], 90.00th=[16057], 95.00th=[16581], 00:32:14.361 | 99.00th=[45351], 99.50th=[46924], 99.90th=[48497], 99.95th=[50070], 00:32:14.361 | 99.99th=[50070] 00:32:14.361 bw ( KiB/s): min=25344, max=30976, per=31.48%, avg=27852.80, stdev=1829.40, samples=10 00:32:14.361 iops : min= 198, max= 242, avg=217.60, stdev=14.29, samples=10 00:32:14.361 lat (msec) : 10=6.97%, 20=92.02%, 50=0.92%, 100=0.09% 00:32:14.361 cpu : usr=93.99%, sys=5.51%, ctx=6, majf=0, minf=89 00:32:14.361 IO depths : 1=1.5%, 2=98.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:14.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.361 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.361 issued rwts: total=1090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:14.361 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:14.361 filename0: (groupid=0, jobs=1): err= 0: pid=1426216: Tue Oct 8 18:39:31 2024 00:32:14.361 read: IOPS=210, BW=26.3MiB/s (27.6MB/s)(133MiB/5044msec) 00:32:14.361 slat (nsec): min=3996, max=27183, avg=14718.58, stdev=1958.34 00:32:14.361 clat (usec): min=4747, max=53071, avg=14202.55, stdev=4821.01 00:32:14.361 lat (usec): min=4755, max=53088, avg=14217.26, stdev=4820.79 00:32:14.361 clat percentiles (usec): 00:32:14.361 | 1.00th=[ 7767], 5.00th=[10290], 10.00th=[11207], 
20.00th=[11994], 00:32:14.361 | 30.00th=[13173], 40.00th=[13698], 50.00th=[14091], 60.00th=[14484], 00:32:14.361 | 70.00th=[14746], 80.00th=[15270], 90.00th=[15795], 95.00th=[16319], 00:32:14.361 | 99.00th=[46400], 99.50th=[51643], 99.90th=[53216], 99.95th=[53216], 00:32:14.361 | 99.99th=[53216] 00:32:14.361 bw ( KiB/s): min=20736, max=29184, per=30.64%, avg=27110.40, stdev=2611.95, samples=10 00:32:14.362 iops : min= 162, max= 228, avg=211.80, stdev=20.41, samples=10 00:32:14.362 lat (msec) : 10=4.43%, 20=93.97%, 50=1.04%, 100=0.57% 00:32:14.362 cpu : usr=94.67%, sys=4.82%, ctx=32, majf=0, minf=71 00:32:14.362 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:14.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.362 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.362 issued rwts: total=1061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:14.362 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:14.362 00:32:14.362 Run status group 0 (all jobs): 00:32:14.362 READ: bw=86.4MiB/s (90.6MB/s), 26.3MiB/s-33.1MiB/s (27.6MB/s-34.7MB/s), io=436MiB (457MB), run=5044-5046msec 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:14.362 bdev_null0 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:14.362 [2024-10-08 18:39:31.823662] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:14.362 bdev_null1 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:14.362 bdev_null2 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:32:14.362 18:39:31 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:14.362 { 00:32:14.362 "params": { 00:32:14.362 "name": "Nvme$subsystem", 00:32:14.362 "trtype": "$TEST_TRANSPORT", 00:32:14.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:14.362 "adrfam": "ipv4", 00:32:14.362 "trsvcid": "$NVMF_PORT", 00:32:14.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:14.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:14.362 "hdgst": ${hdgst:-false}, 00:32:14.362 "ddgst": ${ddgst:-false} 00:32:14.362 }, 00:32:14.362 "method": "bdev_nvme_attach_controller" 00:32:14.362 } 00:32:14.362 EOF 00:32:14.362 )") 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:14.362 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:14.363 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:14.363 18:39:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:14.363 18:39:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:14.363 { 00:32:14.363 "params": { 00:32:14.363 "name": "Nvme$subsystem", 00:32:14.363 "trtype": "$TEST_TRANSPORT", 00:32:14.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:14.363 "adrfam": "ipv4", 00:32:14.363 "trsvcid": "$NVMF_PORT", 00:32:14.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:14.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:14.363 "hdgst": ${hdgst:-false}, 00:32:14.363 "ddgst": ${ddgst:-false} 00:32:14.363 }, 00:32:14.363 "method": "bdev_nvme_attach_controller" 00:32:14.363 } 00:32:14.363 EOF 00:32:14.363 )") 00:32:14.363 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:14.363 18:39:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:32:14.363 18:39:31 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:14.363 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:14.363 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:14.363 18:39:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:14.363 18:39:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:14.363 18:39:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:14.363 { 00:32:14.363 "params": { 00:32:14.363 "name": "Nvme$subsystem", 00:32:14.363 "trtype": "$TEST_TRANSPORT", 00:32:14.363 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:14.363 "adrfam": "ipv4", 00:32:14.363 "trsvcid": "$NVMF_PORT", 00:32:14.363 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:14.363 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:14.363 "hdgst": ${hdgst:-false}, 00:32:14.363 "ddgst": ${ddgst:-false} 00:32:14.363 }, 00:32:14.363 "method": "bdev_nvme_attach_controller" 00:32:14.363 } 00:32:14.363 EOF 00:32:14.363 )") 00:32:14.363 18:39:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:32:14.363 18:39:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:32:14.363 18:39:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:32:14.363 18:39:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:14.363 "params": { 00:32:14.363 "name": "Nvme0", 00:32:14.363 "trtype": "tcp", 00:32:14.363 "traddr": "10.0.0.2", 00:32:14.363 "adrfam": "ipv4", 00:32:14.363 "trsvcid": "4420", 00:32:14.363 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:14.363 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:14.363 "hdgst": false, 00:32:14.363 "ddgst": false 00:32:14.363 }, 00:32:14.363 "method": "bdev_nvme_attach_controller" 00:32:14.363 },{ 00:32:14.363 "params": { 00:32:14.363 "name": "Nvme1", 00:32:14.363 "trtype": "tcp", 00:32:14.363 "traddr": "10.0.0.2", 00:32:14.363 "adrfam": "ipv4", 00:32:14.363 "trsvcid": "4420", 00:32:14.363 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:14.363 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:14.363 "hdgst": false, 00:32:14.363 "ddgst": false 00:32:14.363 }, 00:32:14.363 "method": "bdev_nvme_attach_controller" 00:32:14.363 },{ 00:32:14.363 "params": { 00:32:14.363 "name": "Nvme2", 00:32:14.363 "trtype": "tcp", 00:32:14.363 "traddr": "10.0.0.2", 00:32:14.363 "adrfam": "ipv4", 00:32:14.363 "trsvcid": "4420", 00:32:14.363 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:14.363 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:14.363 "hdgst": false, 00:32:14.363 "ddgst": false 00:32:14.363 }, 00:32:14.363 "method": "bdev_nvme_attach_controller" 00:32:14.363 }' 00:32:14.363 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:14.363 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:14.363 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:14.363 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:14.363 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:14.363 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:14.363 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 
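gen_nvmf_target_json builds the three bdev_nvme_attach_controller fragments shown above one subsystem at a time, then comma-joins them (nvmf/common.sh@583-584) before splicing them into the config that fio reads from /dev/fd/62. A condensed sketch of that shell pattern follows; the enclosing [ ] is only added here so jq can validate the joined fragments on their own, and is an assumption rather than what the script emits.

# rebuild the per-subsystem fragments the trace prints, then join them the same way
config=()
for subsystem in 0 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done
# IFS=, makes ${config[*]} expand comma-separated, mirroring nvmf/common.sh@583-584
(IFS=,; printf '[%s]\n' "${config[*]}") | jq .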
00:32:14.363 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:14.363 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:14.363 18:39:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:14.363 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:14.363 ... 00:32:14.363 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:14.363 ... 00:32:14.363 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:14.363 ... 00:32:14.363 fio-3.35 00:32:14.363 Starting 24 threads 00:32:26.626 00:32:26.626 filename0: (groupid=0, jobs=1): err= 0: pid=1427005: Tue Oct 8 18:39:43 2024 00:32:26.626 read: IOPS=474, BW=1899KiB/s (1944kB/s)(18.6MiB/10011msec) 00:32:26.626 slat (usec): min=9, max=123, avg=39.20, stdev=21.44 00:32:26.626 clat (usec): min=16091, max=39316, avg=33361.41, stdev=1497.05 00:32:26.626 lat (usec): min=16173, max=39337, avg=33400.61, stdev=1494.04 00:32:26.626 clat percentiles (usec): 00:32:26.626 | 1.00th=[25822], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:32:26.626 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:32:26.626 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:32:26.626 | 99.00th=[35390], 99.50th=[35914], 99.90th=[39060], 99.95th=[39060], 00:32:26.626 | 99.99th=[39060] 00:32:26.626 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1894.40, stdev=52.53, samples=20 00:32:26.626 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:32:26.626 lat (msec) : 20=0.34%, 50=99.66% 00:32:26.626 cpu : usr=98.31%, sys=1.26%, ctx=19, majf=0, minf=9 00:32:26.626 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:26.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.626 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.626 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:26.626 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:26.626 filename0: (groupid=0, jobs=1): err= 0: pid=1427006: Tue Oct 8 18:39:43 2024 00:32:26.626 read: IOPS=471, BW=1888KiB/s (1933kB/s)(18.4MiB/10002msec) 00:32:26.626 slat (usec): min=5, max=109, avg=38.41, stdev=24.48 00:32:26.626 clat (usec): min=19486, max=61860, avg=33557.56, stdev=1932.98 00:32:26.626 lat (usec): min=19510, max=61874, avg=33595.98, stdev=1929.31 00:32:26.626 clat percentiles (usec): 00:32:26.626 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:32:26.626 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:32:26.626 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:32:26.626 | 99.00th=[35914], 99.50th=[36963], 99.90th=[61604], 99.95th=[61604], 00:32:26.626 | 99.99th=[61604] 00:32:26.626 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1879.58, stdev=74.55, samples=19 00:32:26.626 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:32:26.626 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:32:26.626 cpu : usr=98.06%, sys=1.43%, ctx=67, majf=0, minf=10 00:32:26.626 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 
32=0.0%, >=64=0.0% 00:32:26.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.626 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.626 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:26.626 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:26.626 filename0: (groupid=0, jobs=1): err= 0: pid=1427008: Tue Oct 8 18:39:43 2024 00:32:26.626 read: IOPS=482, BW=1931KiB/s (1977kB/s)(18.9MiB/10009msec) 00:32:26.626 slat (nsec): min=3487, max=45522, avg=16736.02, stdev=7408.31 00:32:26.626 clat (usec): min=1666, max=40830, avg=32991.14, stdev=4494.03 00:32:26.626 lat (usec): min=1673, max=40860, avg=33007.88, stdev=4495.12 00:32:26.626 clat percentiles (usec): 00:32:26.626 | 1.00th=[ 3621], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:32:26.626 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:32:26.626 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:32:26.626 | 99.00th=[35914], 99.50th=[35914], 99.90th=[36963], 99.95th=[36963], 00:32:26.626 | 99.99th=[40633] 00:32:26.626 bw ( KiB/s): min= 1792, max= 2688, per=4.24%, avg=1926.40, stdev=187.91, samples=20 00:32:26.626 iops : min= 448, max= 672, avg=481.60, stdev=46.98, samples=20 00:32:26.626 lat (msec) : 2=0.95%, 4=0.70%, 10=0.33%, 20=0.33%, 50=97.68% 00:32:26.626 cpu : usr=97.08%, sys=1.83%, ctx=222, majf=0, minf=9 00:32:26.626 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:26.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.626 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.626 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:26.626 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:26.626 filename0: (groupid=0, jobs=1): err= 0: pid=1427009: Tue Oct 8 18:39:43 2024 00:32:26.626 read: IOPS=472, BW=1892KiB/s (1937kB/s)(18.5MiB/10014msec) 00:32:26.626 slat (usec): min=9, max=123, avg=40.37, stdev=15.29 00:32:26.626 clat (usec): min=14967, max=43838, avg=33490.35, stdev=1318.09 00:32:26.626 lat (usec): min=14978, max=43866, avg=33530.72, stdev=1317.96 00:32:26.626 clat percentiles (usec): 00:32:26.626 | 1.00th=[32637], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:32:26.627 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:32:26.627 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:32:26.627 | 99.00th=[35914], 99.50th=[39060], 99.90th=[43779], 99.95th=[43779], 00:32:26.627 | 99.99th=[43779] 00:32:26.627 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1886.32, stdev=56.16, samples=19 00:32:26.627 iops : min= 448, max= 480, avg=471.58, stdev=14.04, samples=19 00:32:26.627 lat (msec) : 20=0.34%, 50=99.66% 00:32:26.627 cpu : usr=98.05%, sys=1.38%, ctx=42, majf=0, minf=9 00:32:26.627 IO depths : 1=2.8%, 2=9.1%, 4=25.0%, 8=53.4%, 16=9.7%, 32=0.0%, >=64=0.0% 00:32:26.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.627 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.627 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:26.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:26.627 filename0: (groupid=0, jobs=1): err= 0: pid=1427010: Tue Oct 8 18:39:43 2024 00:32:26.627 read: IOPS=473, BW=1893KiB/s (1939kB/s)(18.5MiB/10007msec) 00:32:26.627 slat (nsec): min=10089, max=89820, avg=40797.38, stdev=12629.05 00:32:26.627 
clat (usec): min=7600, max=64326, avg=33429.74, stdev=2613.01 00:32:26.627 lat (usec): min=7610, max=64367, avg=33470.54, stdev=2613.58 00:32:26.627 clat percentiles (usec): 00:32:26.627 | 1.00th=[27919], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:32:26.627 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:32:26.627 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:32:26.627 | 99.00th=[35390], 99.50th=[39060], 99.90th=[64226], 99.95th=[64226], 00:32:26.627 | 99.99th=[64226] 00:32:26.627 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1879.58, stdev=74.55, samples=19 00:32:26.627 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:32:26.627 lat (msec) : 10=0.34%, 20=0.34%, 50=98.99%, 100=0.34% 00:32:26.627 cpu : usr=97.67%, sys=1.56%, ctx=134, majf=0, minf=9 00:32:26.627 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:26.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.627 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.627 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:26.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:26.627 filename0: (groupid=0, jobs=1): err= 0: pid=1427011: Tue Oct 8 18:39:43 2024 00:32:26.627 read: IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10012msec) 00:32:26.627 slat (usec): min=10, max=137, avg=45.74, stdev=21.29 00:32:26.627 clat (usec): min=20417, max=39404, avg=33359.48, stdev=897.65 00:32:26.627 lat (usec): min=20475, max=39424, avg=33405.22, stdev=899.66 00:32:26.627 clat percentiles (usec): 00:32:26.627 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:32:26.627 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:32:26.627 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:32:26.627 | 99.00th=[35390], 99.50th=[35914], 99.90th=[39060], 99.95th=[39060], 00:32:26.627 | 99.99th=[39584] 00:32:26.627 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1888.00, stdev=56.87, samples=20 00:32:26.627 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:32:26.627 lat (msec) : 50=100.00% 00:32:26.627 cpu : usr=98.19%, sys=1.37%, ctx=10, majf=0, minf=9 00:32:26.627 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:26.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.627 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.627 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:26.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:26.627 filename0: (groupid=0, jobs=1): err= 0: pid=1427012: Tue Oct 8 18:39:43 2024 00:32:26.627 read: IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10012msec) 00:32:26.627 slat (usec): min=7, max=158, avg=47.31, stdev=27.18 00:32:26.627 clat (usec): min=22593, max=39114, avg=33412.66, stdev=898.10 00:32:26.627 lat (usec): min=22664, max=39201, avg=33459.97, stdev=892.48 00:32:26.627 clat percentiles (usec): 00:32:26.627 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:32:26.627 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:32:26.627 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:32:26.627 | 99.00th=[35390], 99.50th=[36439], 99.90th=[39060], 99.95th=[39060], 00:32:26.627 | 99.99th=[39060] 00:32:26.627 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1888.00, 
stdev=56.87, samples=20 00:32:26.627 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:32:26.627 lat (msec) : 50=100.00% 00:32:26.627 cpu : usr=98.07%, sys=1.50%, ctx=26, majf=0, minf=9 00:32:26.627 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:26.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.627 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.627 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:26.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:26.627 filename0: (groupid=0, jobs=1): err= 0: pid=1427013: Tue Oct 8 18:39:43 2024 00:32:26.627 read: IOPS=472, BW=1892KiB/s (1937kB/s)(18.5MiB/10015msec) 00:32:26.627 slat (nsec): min=8426, max=59765, avg=23273.70, stdev=9067.34 00:32:26.627 clat (usec): min=14532, max=56075, avg=33604.91, stdev=1017.82 00:32:26.627 lat (usec): min=14582, max=56103, avg=33628.18, stdev=1017.57 00:32:26.627 clat percentiles (usec): 00:32:26.627 | 1.00th=[32637], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:32:26.627 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:32:26.627 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:32:26.627 | 99.00th=[35914], 99.50th=[39060], 99.90th=[39584], 99.95th=[39584], 00:32:26.627 | 99.99th=[55837] 00:32:26.627 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1888.00, stdev=56.87, samples=20 00:32:26.627 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:32:26.627 lat (msec) : 20=0.04%, 50=99.92%, 100=0.04% 00:32:26.627 cpu : usr=98.20%, sys=1.39%, ctx=13, majf=0, minf=9 00:32:26.627 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:26.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.627 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.627 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:26.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:26.627 filename1: (groupid=0, jobs=1): err= 0: pid=1427014: Tue Oct 8 18:39:43 2024 00:32:26.627 read: IOPS=471, BW=1888KiB/s (1933kB/s)(18.4MiB/10001msec) 00:32:26.627 slat (nsec): min=7312, max=91389, avg=38448.54, stdev=10767.40 00:32:26.627 clat (usec): min=16566, max=66345, avg=33540.43, stdev=2228.20 00:32:26.627 lat (usec): min=16612, max=66374, avg=33578.88, stdev=2227.19 00:32:26.627 clat percentiles (usec): 00:32:26.627 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:32:26.627 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:32:26.627 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:32:26.627 | 99.00th=[35390], 99.50th=[39060], 99.90th=[66323], 99.95th=[66323], 00:32:26.627 | 99.99th=[66323] 00:32:26.627 bw ( KiB/s): min= 1667, max= 1920, per=4.14%, avg=1879.74, stdev=74.07, samples=19 00:32:26.627 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:32:26.627 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:32:26.627 cpu : usr=97.16%, sys=1.84%, ctx=81, majf=0, minf=9 00:32:26.627 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:32:26.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.627 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.627 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:26.627 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:32:26.627 filename1: (groupid=0, jobs=1): err= 0: pid=1427016: Tue Oct 8 18:39:43 2024 00:32:26.627 read: IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10012msec) 00:32:26.627 slat (usec): min=9, max=106, avg=32.74, stdev=12.50 00:32:26.627 clat (usec): min=20579, max=39311, avg=33548.12, stdev=881.05 00:32:26.627 lat (usec): min=20613, max=39355, avg=33580.86, stdev=879.66 00:32:26.627 clat percentiles (usec): 00:32:26.627 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:32:26.627 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:32:26.627 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:32:26.627 | 99.00th=[35914], 99.50th=[35914], 99.90th=[39060], 99.95th=[39060], 00:32:26.627 | 99.99th=[39060] 00:32:26.627 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1888.00, stdev=56.87, samples=20 00:32:26.627 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:32:26.627 lat (msec) : 50=100.00% 00:32:26.627 cpu : usr=96.86%, sys=2.03%, ctx=151, majf=0, minf=9 00:32:26.627 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:26.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.627 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.627 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:26.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:26.627 filename1: (groupid=0, jobs=1): err= 0: pid=1427017: Tue Oct 8 18:39:43 2024 00:32:26.627 read: IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10012msec) 00:32:26.627 slat (nsec): min=14419, max=92077, avg=39695.90, stdev=11923.86 00:32:26.627 clat (usec): min=23307, max=39192, avg=33482.08, stdev=848.25 00:32:26.627 lat (usec): min=23345, max=39214, avg=33521.78, stdev=848.24 00:32:26.627 clat percentiles (usec): 00:32:26.627 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:32:26.627 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:32:26.627 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:32:26.627 | 99.00th=[35390], 99.50th=[35914], 99.90th=[39060], 99.95th=[39060], 00:32:26.627 | 99.99th=[39060] 00:32:26.627 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1888.00, stdev=56.87, samples=20 00:32:26.627 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:32:26.627 lat (msec) : 50=100.00% 00:32:26.627 cpu : usr=97.10%, sys=1.95%, ctx=110, majf=0, minf=9 00:32:26.627 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:26.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.627 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.627 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:26.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:26.627 filename1: (groupid=0, jobs=1): err= 0: pid=1427018: Tue Oct 8 18:39:43 2024 00:32:26.627 read: IOPS=474, BW=1899KiB/s (1944kB/s)(18.6MiB/10011msec) 00:32:26.627 slat (usec): min=8, max=162, avg=24.77, stdev=16.04 00:32:26.627 clat (usec): min=10152, max=39319, avg=33498.48, stdev=1521.05 00:32:26.627 lat (usec): min=10287, max=39336, avg=33523.25, stdev=1516.45 00:32:26.627 clat percentiles (usec): 00:32:26.627 | 1.00th=[25822], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:32:26.627 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:32:26.627 | 
70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:32:26.627 | 99.00th=[35390], 99.50th=[35914], 99.90th=[39060], 99.95th=[39060], 00:32:26.627 | 99.99th=[39060] 00:32:26.627 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1894.40, stdev=52.53, samples=20 00:32:26.627 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:32:26.627 lat (msec) : 20=0.34%, 50=99.66% 00:32:26.627 cpu : usr=98.36%, sys=1.25%, ctx=24, majf=0, minf=9 00:32:26.627 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:26.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.627 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.627 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:26.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:26.627 filename1: (groupid=0, jobs=1): err= 0: pid=1427019: Tue Oct 8 18:39:43 2024 00:32:26.627 read: IOPS=473, BW=1893KiB/s (1939kB/s)(18.5MiB/10007msec) 00:32:26.627 slat (usec): min=7, max=112, avg=48.80, stdev=21.60 00:32:26.627 clat (usec): min=7570, max=63891, avg=33389.63, stdev=2634.04 00:32:26.627 lat (usec): min=7578, max=63936, avg=33438.42, stdev=2632.95 00:32:26.627 clat percentiles (usec): 00:32:26.627 | 1.00th=[28181], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:32:26.627 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:32:26.627 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:32:26.627 | 99.00th=[35390], 99.50th=[39060], 99.90th=[63701], 99.95th=[63701], 00:32:26.627 | 99.99th=[63701] 00:32:26.627 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1879.58, stdev=71.83, samples=19 00:32:26.627 iops : min= 416, max= 480, avg=469.89, stdev=17.96, samples=19 00:32:26.627 lat (msec) : 10=0.38%, 20=0.30%, 50=98.99%, 100=0.34% 00:32:26.627 cpu : usr=98.38%, sys=1.20%, ctx=15, majf=0, minf=9 00:32:26.627 IO depths : 1=2.7%, 2=9.0%, 4=25.0%, 8=53.5%, 16=9.8%, 32=0.0%, >=64=0.0% 00:32:26.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.627 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.627 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:26.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:26.627 filename1: (groupid=0, jobs=1): err= 0: pid=1427020: Tue Oct 8 18:39:43 2024 00:32:26.627 read: IOPS=472, BW=1891KiB/s (1937kB/s)(18.5MiB/10016msec) 00:32:26.627 slat (nsec): min=5908, max=48385, avg=21447.14, stdev=5194.12 00:32:26.627 clat (usec): min=19545, max=44503, avg=33647.27, stdev=1211.45 00:32:26.627 lat (usec): min=19569, max=44517, avg=33668.72, stdev=1211.10 00:32:26.627 clat percentiles (usec): 00:32:26.627 | 1.00th=[31327], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:32:26.627 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:32:26.627 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:32:26.627 | 99.00th=[36963], 99.50th=[39584], 99.90th=[42206], 99.95th=[42206], 00:32:26.627 | 99.99th=[44303] 00:32:26.627 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1886.32, stdev=56.16, samples=19 00:32:26.627 iops : min= 448, max= 480, avg=471.58, stdev=14.04, samples=19 00:32:26.627 lat (msec) : 20=0.30%, 50=99.70% 00:32:26.627 cpu : usr=98.38%, sys=1.23%, ctx=13, majf=0, minf=10 00:32:26.627 IO depths : 1=4.1%, 2=10.3%, 4=25.0%, 8=52.2%, 16=8.4%, 32=0.0%, >=64=0.0% 00:32:26.627 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.628 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.628 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:26.628 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:26.628 filename1: (groupid=0, jobs=1): err= 0: pid=1427021: Tue Oct 8 18:39:43 2024 00:32:26.628 read: IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10012msec) 00:32:26.628 slat (nsec): min=13967, max=92082, avg=39093.75, stdev=12719.32 00:32:26.628 clat (usec): min=23307, max=39279, avg=33498.47, stdev=850.08 00:32:26.628 lat (usec): min=23345, max=39301, avg=33537.56, stdev=849.46 00:32:26.628 clat percentiles (usec): 00:32:26.628 | 1.00th=[32637], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:32:26.628 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:32:26.628 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:32:26.628 | 99.00th=[35390], 99.50th=[35914], 99.90th=[39060], 99.95th=[39060], 00:32:26.628 | 99.99th=[39060] 00:32:26.628 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1888.00, stdev=56.87, samples=20 00:32:26.628 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:32:26.628 lat (msec) : 50=100.00% 00:32:26.628 cpu : usr=98.24%, sys=1.35%, ctx=13, majf=0, minf=9 00:32:26.628 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:26.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.628 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.628 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:26.628 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:26.628 filename1: (groupid=0, jobs=1): err= 0: pid=1427022: Tue Oct 8 18:39:43 2024 00:32:26.628 read: IOPS=472, BW=1892KiB/s (1937kB/s)(18.5MiB/10013msec) 00:32:26.628 slat (usec): min=11, max=140, avg=43.37, stdev=14.09 00:32:26.628 clat (usec): min=16509, max=43837, avg=33440.26, stdev=1292.36 00:32:26.628 lat (usec): min=16530, max=43866, avg=33483.62, stdev=1292.35 00:32:26.628 clat percentiles (usec): 00:32:26.628 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:32:26.628 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:32:26.628 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:32:26.628 | 99.00th=[35390], 99.50th=[38536], 99.90th=[43779], 99.95th=[43779], 00:32:26.628 | 99.99th=[43779] 00:32:26.628 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1886.32, stdev=57.91, samples=19 00:32:26.628 iops : min= 448, max= 480, avg=471.58, stdev=14.48, samples=19 00:32:26.628 lat (msec) : 20=0.34%, 50=99.66% 00:32:26.628 cpu : usr=98.23%, sys=1.35%, ctx=13, majf=0, minf=9 00:32:26.628 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:26.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.628 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.628 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:26.628 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:26.628 filename2: (groupid=0, jobs=1): err= 0: pid=1427024: Tue Oct 8 18:39:43 2024 00:32:26.628 read: IOPS=473, BW=1894KiB/s (1939kB/s)(18.5MiB/10004msec) 00:32:26.628 slat (nsec): min=3992, max=92077, avg=21788.05, stdev=9038.40 00:32:26.628 clat (usec): min=18483, max=39194, avg=33614.88, stdev=1174.41 00:32:26.628 lat (usec): min=18493, 
max=39215, avg=33636.67, stdev=1173.76 00:32:26.628 clat percentiles (usec): 00:32:26.628 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:32:26.628 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:32:26.628 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:32:26.628 | 99.00th=[35914], 99.50th=[37487], 99.90th=[39060], 99.95th=[39060], 00:32:26.628 | 99.99th=[39060] 00:32:26.628 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1893.05, stdev=53.61, samples=19 00:32:26.628 iops : min= 448, max= 480, avg=473.26, stdev=13.40, samples=19 00:32:26.628 lat (msec) : 20=0.30%, 50=99.70% 00:32:26.628 cpu : usr=97.24%, sys=1.91%, ctx=159, majf=0, minf=9 00:32:26.628 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:26.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.628 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.628 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:26.628 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:26.628 filename2: (groupid=0, jobs=1): err= 0: pid=1427025: Tue Oct 8 18:39:43 2024 00:32:26.628 read: IOPS=471, BW=1888KiB/s (1933kB/s)(18.4MiB/10002msec) 00:32:26.628 slat (usec): min=8, max=105, avg=52.86, stdev=19.17 00:32:26.628 clat (usec): min=16450, max=59578, avg=33455.93, stdev=2023.00 00:32:26.628 lat (usec): min=16488, max=59618, avg=33508.78, stdev=2022.11 00:32:26.628 clat percentiles (usec): 00:32:26.628 | 1.00th=[29754], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:32:26.628 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:32:26.628 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:32:26.628 | 99.00th=[36963], 99.50th=[42206], 99.90th=[59507], 99.95th=[59507], 00:32:26.628 | 99.99th=[59507] 00:32:26.628 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1879.58, stdev=73.20, samples=19 00:32:26.628 iops : min= 416, max= 480, avg=469.89, stdev=18.30, samples=19 00:32:26.628 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:32:26.628 cpu : usr=98.25%, sys=1.33%, ctx=12, majf=0, minf=9 00:32:26.628 IO depths : 1=4.3%, 2=10.5%, 4=24.9%, 8=52.1%, 16=8.2%, 32=0.0%, >=64=0.0% 00:32:26.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.628 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.628 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:26.628 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:26.628 filename2: (groupid=0, jobs=1): err= 0: pid=1427026: Tue Oct 8 18:39:43 2024 00:32:26.628 read: IOPS=472, BW=1892KiB/s (1937kB/s)(18.5MiB/10015msec) 00:32:26.628 slat (nsec): min=8459, max=48318, avg=22119.40, stdev=5844.24 00:32:26.628 clat (usec): min=25720, max=39755, avg=33627.13, stdev=838.16 00:32:26.628 lat (usec): min=25756, max=39783, avg=33649.25, stdev=837.94 00:32:26.628 clat percentiles (usec): 00:32:26.628 | 1.00th=[32637], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:32:26.628 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:32:26.628 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:32:26.628 | 99.00th=[35914], 99.50th=[39060], 99.90th=[39584], 99.95th=[39584], 00:32:26.628 | 99.99th=[39584] 00:32:26.628 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1888.00, stdev=56.87, samples=20 00:32:26.628 iops : min= 448, max= 480, avg=472.00, stdev=14.22, 
samples=20 00:32:26.628 lat (msec) : 50=100.00% 00:32:26.628 cpu : usr=97.48%, sys=1.66%, ctx=103, majf=0, minf=9 00:32:26.628 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:26.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.628 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.628 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:26.628 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:26.628 filename2: (groupid=0, jobs=1): err= 0: pid=1427027: Tue Oct 8 18:39:43 2024 00:32:26.628 read: IOPS=473, BW=1893KiB/s (1939kB/s)(18.5MiB/10006msec) 00:32:26.628 slat (usec): min=9, max=148, avg=43.89, stdev=21.75 00:32:26.628 clat (usec): min=7181, max=63901, avg=33346.63, stdev=2592.59 00:32:26.628 lat (usec): min=7232, max=63946, avg=33390.51, stdev=2594.13 00:32:26.628 clat percentiles (usec): 00:32:26.628 | 1.00th=[27919], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:32:26.628 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:32:26.628 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:32:26.628 | 99.00th=[35390], 99.50th=[39060], 99.90th=[63701], 99.95th=[63701], 00:32:26.628 | 99.99th=[63701] 00:32:26.628 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1879.58, stdev=74.55, samples=19 00:32:26.628 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:32:26.628 lat (msec) : 10=0.34%, 20=0.34%, 50=98.99%, 100=0.34% 00:32:26.628 cpu : usr=98.39%, sys=1.17%, ctx=13, majf=0, minf=9 00:32:26.628 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:26.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.628 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.628 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:26.628 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:26.628 filename2: (groupid=0, jobs=1): err= 0: pid=1427028: Tue Oct 8 18:39:43 2024 00:32:26.628 read: IOPS=471, BW=1888KiB/s (1933kB/s)(18.4MiB/10001msec) 00:32:26.628 slat (nsec): min=13459, max=96463, avg=38247.42, stdev=9730.24 00:32:26.628 clat (usec): min=16358, max=66079, avg=33553.87, stdev=2208.98 00:32:26.628 lat (usec): min=16376, max=66115, avg=33592.12, stdev=2208.64 00:32:26.628 clat percentiles (usec): 00:32:26.628 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:32:26.628 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:32:26.628 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:32:26.628 | 99.00th=[35390], 99.50th=[39060], 99.90th=[65799], 99.95th=[65799], 00:32:26.628 | 99.99th=[66323] 00:32:26.628 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1879.58, stdev=74.55, samples=19 00:32:26.628 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:32:26.628 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:32:26.628 cpu : usr=98.28%, sys=1.27%, ctx=31, majf=0, minf=9 00:32:26.628 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:26.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.628 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.628 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:26.628 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:26.628 filename2: (groupid=0, jobs=1): err= 0: pid=1427029: 
Tue Oct 8 18:39:43 2024 00:32:26.628 read: IOPS=473, BW=1893KiB/s (1939kB/s)(18.5MiB/10006msec) 00:32:26.628 slat (usec): min=13, max=108, avg=51.14, stdev=17.17 00:32:26.628 clat (usec): min=6983, max=63904, avg=33341.97, stdev=2595.74 00:32:26.628 lat (usec): min=7018, max=63945, avg=33393.11, stdev=2595.49 00:32:26.628 clat percentiles (usec): 00:32:26.628 | 1.00th=[28181], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:32:26.628 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:32:26.628 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:32:26.629 | 99.00th=[35390], 99.50th=[39060], 99.90th=[63701], 99.95th=[63701], 00:32:26.629 | 99.99th=[63701] 00:32:26.629 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1879.58, stdev=74.55, samples=19 00:32:26.629 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:32:26.629 lat (msec) : 10=0.34%, 20=0.34%, 50=98.99%, 100=0.34% 00:32:26.629 cpu : usr=98.34%, sys=1.12%, ctx=73, majf=0, minf=9 00:32:26.629 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:32:26.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.629 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.629 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:26.629 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:26.629 filename2: (groupid=0, jobs=1): err= 0: pid=1427030: Tue Oct 8 18:39:43 2024 00:32:26.629 read: IOPS=474, BW=1899KiB/s (1944kB/s)(18.6MiB/10011msec) 00:32:26.629 slat (nsec): min=8480, max=98685, avg=23212.11, stdev=13744.50 00:32:26.629 clat (usec): min=13292, max=39324, avg=33513.66, stdev=1542.27 00:32:26.629 lat (usec): min=13350, max=39340, avg=33536.88, stdev=1538.25 00:32:26.629 clat percentiles (usec): 00:32:26.629 | 1.00th=[26346], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:32:26.629 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:32:26.629 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:32:26.629 | 99.00th=[35390], 99.50th=[35914], 99.90th=[39060], 99.95th=[39060], 00:32:26.629 | 99.99th=[39584] 00:32:26.629 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1894.40, stdev=52.53, samples=20 00:32:26.629 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:32:26.629 lat (msec) : 20=0.34%, 50=99.66% 00:32:26.629 cpu : usr=98.40%, sys=1.20%, ctx=16, majf=0, minf=9 00:32:26.629 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:26.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.629 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.629 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:26.629 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:26.629 filename2: (groupid=0, jobs=1): err= 0: pid=1427031: Tue Oct 8 18:39:43 2024 00:32:26.629 read: IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10012msec) 00:32:26.629 slat (nsec): min=9025, max=78905, avg=30998.54, stdev=10324.59 00:32:26.629 clat (usec): min=23406, max=39271, avg=33569.10, stdev=848.26 00:32:26.629 lat (usec): min=23417, max=39297, avg=33600.10, stdev=846.97 00:32:26.629 clat percentiles (usec): 00:32:26.629 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33162], 00:32:26.629 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:32:26.629 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 
95.00th=[34341], 00:32:26.629 | 99.00th=[35390], 99.50th=[35914], 99.90th=[39060], 99.95th=[39060], 00:32:26.629 | 99.99th=[39060] 00:32:26.629 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1888.00, stdev=55.97, samples=20 00:32:26.629 iops : min= 448, max= 480, avg=472.00, stdev=13.99, samples=20 00:32:26.629 lat (msec) : 50=100.00% 00:32:26.629 cpu : usr=98.19%, sys=1.41%, ctx=20, majf=0, minf=9 00:32:26.629 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:32:26.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.629 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.629 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:26.629 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:26.629 00:32:26.629 Run status group 0 (all jobs): 00:32:26.629 READ: bw=44.4MiB/s (46.5MB/s), 1888KiB/s-1931KiB/s (1933kB/s-1977kB/s), io=444MiB (466MB), run=10001-10016msec 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:26.629 18:39:43 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:26.629 bdev_null0 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:26.629 [2024-10-08 18:39:43.495605] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:26.629 bdev_null1 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:26.629 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1339 -- # local sanitizers 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:26.630 { 00:32:26.630 "params": { 00:32:26.630 "name": "Nvme$subsystem", 00:32:26.630 "trtype": "$TEST_TRANSPORT", 00:32:26.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:26.630 "adrfam": "ipv4", 00:32:26.630 "trsvcid": "$NVMF_PORT", 00:32:26.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:26.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:26.630 "hdgst": ${hdgst:-false}, 00:32:26.630 "ddgst": ${ddgst:-false} 00:32:26.630 }, 00:32:26.630 "method": "bdev_nvme_attach_controller" 00:32:26.630 } 00:32:26.630 EOF 00:32:26.630 )") 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:26.630 { 00:32:26.630 "params": { 00:32:26.630 "name": "Nvme$subsystem", 00:32:26.630 "trtype": "$TEST_TRANSPORT", 00:32:26.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:26.630 "adrfam": "ipv4", 00:32:26.630 "trsvcid": "$NVMF_PORT", 00:32:26.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:26.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:26.630 "hdgst": ${hdgst:-false}, 00:32:26.630 "ddgst": ${ddgst:-false} 00:32:26.630 }, 00:32:26.630 "method": "bdev_nvme_attach_controller" 00:32:26.630 } 00:32:26.630 EOF 00:32:26.630 )") 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:32:26.630 18:39:43 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:26.630 "params": { 00:32:26.630 "name": "Nvme0", 00:32:26.630 "trtype": "tcp", 00:32:26.630 "traddr": "10.0.0.2", 00:32:26.630 "adrfam": "ipv4", 00:32:26.630 "trsvcid": "4420", 00:32:26.630 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:26.630 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:26.630 "hdgst": false, 00:32:26.630 "ddgst": false 00:32:26.630 }, 00:32:26.630 "method": "bdev_nvme_attach_controller" 00:32:26.630 },{ 00:32:26.630 "params": { 00:32:26.630 "name": "Nvme1", 00:32:26.630 "trtype": "tcp", 00:32:26.630 "traddr": "10.0.0.2", 00:32:26.630 "adrfam": "ipv4", 00:32:26.630 "trsvcid": "4420", 00:32:26.630 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:26.630 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:26.630 "hdgst": false, 00:32:26.630 "ddgst": false 00:32:26.630 }, 00:32:26.630 "method": "bdev_nvme_attach_controller" 00:32:26.630 }' 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:26.630 18:39:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:26.630 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:26.630 ... 00:32:26.630 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:26.630 ... 
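For reference, a rough standalone equivalent of the fio_bdev invocation traced just above. This is a sketch only: the job file contents and the bdev names Nvme0n1/Nvme1n1 are assumptions (gen_fio_conf's output is not captured in this log), and the JSON config is presumed to be the two-controller document printed above saved to /tmp/nvmf_targets.json.
# minimal job file matching NULL_DIF=1, bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1
cat > /tmp/dif_rand.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
time_based=1
runtime=5

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF
# run fio through the SPDK bdev engine, pointing it at the generated controller config
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --spdk_json_conf=/tmp/nvmf_targets.json /tmp/dif_rand.fio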
00:32:26.630 fio-3.35 00:32:26.630 Starting 4 threads 00:32:31.892 00:32:31.892 filename0: (groupid=0, jobs=1): err= 0: pid=1428369: Tue Oct 8 18:39:49 2024 00:32:31.892 read: IOPS=1949, BW=15.2MiB/s (16.0MB/s)(76.2MiB/5002msec) 00:32:31.892 slat (nsec): min=4297, max=61254, avg=16070.58, stdev=7477.32 00:32:31.892 clat (usec): min=738, max=7531, avg=4047.74, stdev=549.20 00:32:31.892 lat (usec): min=751, max=7551, avg=4063.81, stdev=551.10 00:32:31.892 clat percentiles (usec): 00:32:31.892 | 1.00th=[ 2343], 5.00th=[ 3130], 10.00th=[ 3392], 20.00th=[ 3654], 00:32:31.892 | 30.00th=[ 3884], 40.00th=[ 4080], 50.00th=[ 4178], 60.00th=[ 4228], 00:32:31.892 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4621], 00:32:31.892 | 99.00th=[ 5669], 99.50th=[ 6063], 99.90th=[ 7046], 99.95th=[ 7308], 00:32:31.892 | 99.99th=[ 7504] 00:32:31.892 bw ( KiB/s): min=14592, max=17248, per=26.50%, avg=15672.89, stdev=955.33, samples=9 00:32:31.892 iops : min= 1824, max= 2156, avg=1959.11, stdev=119.42, samples=9 00:32:31.892 lat (usec) : 750=0.01%, 1000=0.02% 00:32:31.892 lat (msec) : 2=0.40%, 4=34.86%, 10=64.70% 00:32:31.892 cpu : usr=94.64%, sys=4.82%, ctx=11, majf=0, minf=0 00:32:31.892 IO depths : 1=0.4%, 2=15.0%, 4=58.2%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:31.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:31.892 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:31.892 issued rwts: total=9752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:31.892 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:31.892 filename0: (groupid=0, jobs=1): err= 0: pid=1428370: Tue Oct 8 18:39:49 2024 00:32:31.892 read: IOPS=1855, BW=14.5MiB/s (15.2MB/s)(72.5MiB/5001msec) 00:32:31.892 slat (nsec): min=4442, max=62618, avg=16744.88, stdev=8831.51 00:32:31.892 clat (usec): min=830, max=7763, avg=4251.68, stdev=606.81 00:32:31.892 lat (usec): min=843, max=7771, avg=4268.42, stdev=606.84 00:32:31.892 clat percentiles (usec): 00:32:31.892 | 1.00th=[ 2409], 5.00th=[ 3458], 10.00th=[ 3687], 20.00th=[ 3982], 00:32:31.892 | 30.00th=[ 4113], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:32:31.892 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4817], 95.00th=[ 5211], 00:32:31.892 | 99.00th=[ 6521], 99.50th=[ 6849], 99.90th=[ 7504], 99.95th=[ 7635], 00:32:31.892 | 99.99th=[ 7767] 00:32:31.892 bw ( KiB/s): min=14544, max=15296, per=25.10%, avg=14844.44, stdev=260.43, samples=9 00:32:31.892 iops : min= 1818, max= 1912, avg=1855.56, stdev=32.55, samples=9 00:32:31.892 lat (usec) : 1000=0.05% 00:32:31.892 lat (msec) : 2=0.53%, 4=20.35%, 10=79.06% 00:32:31.892 cpu : usr=95.08%, sys=4.44%, ctx=10, majf=0, minf=9 00:32:31.892 IO depths : 1=0.3%, 2=13.4%, 4=58.8%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:31.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:31.892 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:31.892 issued rwts: total=9281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:31.893 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:31.893 filename1: (groupid=0, jobs=1): err= 0: pid=1428371: Tue Oct 8 18:39:49 2024 00:32:31.893 read: IOPS=1854, BW=14.5MiB/s (15.2MB/s)(72.4MiB/5001msec) 00:32:31.893 slat (usec): min=4, max=246, avg=16.40, stdev= 9.03 00:32:31.893 clat (usec): min=954, max=7786, avg=4256.62, stdev=559.51 00:32:31.893 lat (usec): min=966, max=7810, avg=4273.02, stdev=559.63 00:32:31.893 clat percentiles (usec): 00:32:31.893 | 1.00th=[ 2802], 5.00th=[ 
3458], 10.00th=[ 3720], 20.00th=[ 3982], 00:32:31.893 | 30.00th=[ 4113], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:32:31.893 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4752], 95.00th=[ 5211], 00:32:31.893 | 99.00th=[ 6259], 99.50th=[ 6652], 99.90th=[ 7439], 99.95th=[ 7767], 00:32:31.893 | 99.99th=[ 7767] 00:32:31.893 bw ( KiB/s): min=14544, max=15248, per=25.08%, avg=14831.78, stdev=261.04, samples=9 00:32:31.893 iops : min= 1818, max= 1906, avg=1853.89, stdev=32.48, samples=9 00:32:31.893 lat (usec) : 1000=0.01% 00:32:31.893 lat (msec) : 2=0.27%, 4=20.04%, 10=79.68% 00:32:31.893 cpu : usr=95.12%, sys=4.38%, ctx=7, majf=0, minf=10 00:32:31.893 IO depths : 1=0.3%, 2=14.0%, 4=58.7%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:31.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:31.893 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:31.893 issued rwts: total=9272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:31.893 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:31.893 filename1: (groupid=0, jobs=1): err= 0: pid=1428372: Tue Oct 8 18:39:49 2024 00:32:31.893 read: IOPS=1777, BW=13.9MiB/s (14.6MB/s)(70.0MiB/5042msec) 00:32:31.893 slat (nsec): min=3991, max=62696, avg=17372.15, stdev=9128.88 00:32:31.893 clat (usec): min=775, max=45193, avg=4419.72, stdev=1018.58 00:32:31.893 lat (usec): min=788, max=45230, avg=4437.09, stdev=1017.85 00:32:31.893 clat percentiles (usec): 00:32:31.893 | 1.00th=[ 3032], 5.00th=[ 3720], 10.00th=[ 3949], 20.00th=[ 4113], 00:32:31.893 | 30.00th=[ 4178], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:32:31.893 | 70.00th=[ 4424], 80.00th=[ 4621], 90.00th=[ 5145], 95.00th=[ 5538], 00:32:31.893 | 99.00th=[ 6915], 99.50th=[ 7111], 99.90th=[ 7504], 99.95th=[ 7898], 00:32:31.893 | 99.99th=[45351] 00:32:31.893 bw ( KiB/s): min=13680, max=14848, per=24.24%, avg=14331.20, stdev=483.60, samples=10 00:32:31.893 iops : min= 1710, max= 1856, avg=1791.40, stdev=60.45, samples=10 00:32:31.893 lat (usec) : 1000=0.08% 00:32:31.893 lat (msec) : 2=0.35%, 4=11.92%, 10=87.61%, 50=0.04% 00:32:31.893 cpu : usr=94.05%, sys=5.44%, ctx=6, majf=0, minf=9 00:32:31.893 IO depths : 1=0.3%, 2=13.1%, 4=59.3%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:31.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:31.893 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:31.893 issued rwts: total=8961,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:31.893 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:31.893 00:32:31.893 Run status group 0 (all jobs): 00:32:31.893 READ: bw=57.7MiB/s (60.5MB/s), 13.9MiB/s-15.2MiB/s (14.6MB/s-16.0MB/s), io=291MiB (305MB), run=5001-5042msec 00:32:31.893 18:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:32:31.893 18:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:31.893 18:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:31.893 18:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:31.893 18:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:31.893 18:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:31.893 18:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.893 18:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # 
set +x 00:32:31.893 18:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.893 18:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:31.893 18:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.893 18:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:31.893 18:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.893 18:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:31.893 18:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:31.893 18:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:31.893 18:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:31.893 18:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.893 18:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:31.893 18:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.893 18:39:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:31.893 18:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.893 18:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:31.893 18:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.893 00:32:31.893 real 0m24.394s 00:32:31.893 user 4m32.868s 00:32:31.893 sys 0m6.403s 00:32:31.893 18:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:31.893 18:39:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:31.893 ************************************ 00:32:31.893 END TEST fio_dif_rand_params 00:32:31.893 ************************************ 00:32:31.893 18:39:49 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:32:31.893 18:39:49 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:31.893 18:39:49 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:31.893 18:39:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:31.893 ************************************ 00:32:31.893 START TEST fio_dif_digest 00:32:31.893 ************************************ 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:32:31.893 
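The create_subsystems trace that follows drives SPDK JSON-RPCs through the rpc_cmd wrapper. A hand-run approximation, assuming the SPDK repo root as the working directory, the default RPC socket, and that the TCP transport was already created earlier in the run:
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3   # 64 MB null bdev, 512B blocks, 16B metadata, DIF type 3
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420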
18:39:50 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:31.893 bdev_null0 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:31.893 [2024-10-08 18:39:50.036626] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:31.893 { 00:32:31.893 "params": { 00:32:31.893 "name": "Nvme$subsystem", 00:32:31.893 "trtype": "$TEST_TRANSPORT", 00:32:31.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:31.893 "adrfam": "ipv4", 00:32:31.893 "trsvcid": "$NVMF_PORT", 00:32:31.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:31.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:31.893 "hdgst": ${hdgst:-false}, 00:32:31.893 "ddgst": 
${ddgst:-false} 00:32:31.893 }, 00:32:31.893 "method": "bdev_nvme_attach_controller" 00:32:31.893 } 00:32:31.893 EOF 00:32:31.893 )") 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:31.893 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:31.894 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:32:31.894 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:31.894 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:31.894 18:39:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:32:31.894 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:31.894 18:39:50 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:32:31.894 18:39:50 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:32:31.894 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:32:31.894 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:31.894 18:39:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
00:32:31.894 18:39:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:32:31.894 18:39:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:31.894 "params": { 00:32:31.894 "name": "Nvme0", 00:32:31.894 "trtype": "tcp", 00:32:31.894 "traddr": "10.0.0.2", 00:32:31.894 "adrfam": "ipv4", 00:32:31.894 "trsvcid": "4420", 00:32:31.894 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:31.894 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:31.894 "hdgst": true, 00:32:31.894 "ddgst": true 00:32:31.894 }, 00:32:31.894 "method": "bdev_nvme_attach_controller" 00:32:31.894 }' 00:32:31.894 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:31.894 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:31.894 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:31.894 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:31.894 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:31.894 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:31.894 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:31.894 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:31.894 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:31.894 18:39:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:32.152 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:32.152 ... 
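Relative to the earlier random-params run, the digest case changes two things: the per-controller JSON printed above carries "hdgst": true and "ddgst": true, so the initiator enables header and data digests on the NVMe/TCP connection, and the fio job switches to bs=128k, iodepth=3, numjobs=3 over a 10s runtime. A sketch of the corresponding job file, with the same caveats as the earlier one (the digest flags live in the bdev JSON, not in fio):
cat > /tmp/dif_digest.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=10

[filename0]
filename=Nvme0n1
EOF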
00:32:32.152 fio-3.35 00:32:32.152 Starting 3 threads 00:32:44.344 00:32:44.344 filename0: (groupid=0, jobs=1): err= 0: pid=1429243: Tue Oct 8 18:40:00 2024 00:32:44.344 read: IOPS=202, BW=25.3MiB/s (26.5MB/s)(254MiB/10045msec) 00:32:44.344 slat (nsec): min=7920, max=92643, avg=13561.79, stdev=2956.65 00:32:44.344 clat (usec): min=11669, max=52814, avg=14784.20, stdev=1495.51 00:32:44.344 lat (usec): min=11682, max=52833, avg=14797.77, stdev=1495.58 00:32:44.344 clat percentiles (usec): 00:32:44.344 | 1.00th=[12649], 5.00th=[13173], 10.00th=[13566], 20.00th=[13960], 00:32:44.344 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14746], 60.00th=[15008], 00:32:44.344 | 70.00th=[15139], 80.00th=[15533], 90.00th=[15926], 95.00th=[16450], 00:32:44.344 | 99.00th=[17171], 99.50th=[17695], 99.90th=[18482], 99.95th=[50594], 00:32:44.344 | 99.99th=[52691] 00:32:44.344 bw ( KiB/s): min=25344, max=27136, per=32.88%, avg=25996.80, stdev=402.42, samples=20 00:32:44.344 iops : min= 198, max= 212, avg=203.10, stdev= 3.14, samples=20 00:32:44.344 lat (msec) : 20=99.90%, 100=0.10% 00:32:44.344 cpu : usr=93.60%, sys=5.91%, ctx=27, majf=0, minf=231 00:32:44.344 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:44.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.344 issued rwts: total=2033,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.344 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:44.344 filename0: (groupid=0, jobs=1): err= 0: pid=1429244: Tue Oct 8 18:40:00 2024 00:32:44.344 read: IOPS=202, BW=25.3MiB/s (26.5MB/s)(254MiB/10045msec) 00:32:44.344 slat (nsec): min=7909, max=37992, avg=13647.95, stdev=2442.45 00:32:44.344 clat (usec): min=11430, max=54421, avg=14777.35, stdev=1450.05 00:32:44.344 lat (usec): min=11443, max=54434, avg=14791.00, stdev=1450.08 00:32:44.344 clat percentiles (usec): 00:32:44.344 | 1.00th=[12518], 5.00th=[13173], 10.00th=[13566], 20.00th=[13960], 00:32:44.344 | 30.00th=[14353], 40.00th=[14484], 50.00th=[14746], 60.00th=[14877], 00:32:44.344 | 70.00th=[15139], 80.00th=[15533], 90.00th=[15926], 95.00th=[16450], 00:32:44.344 | 99.00th=[17171], 99.50th=[17695], 99.90th=[18744], 99.95th=[44827], 00:32:44.344 | 99.99th=[54264] 00:32:44.344 bw ( KiB/s): min=25600, max=26624, per=32.90%, avg=26009.60, stdev=325.94, samples=20 00:32:44.344 iops : min= 200, max= 208, avg=203.20, stdev= 2.55, samples=20 00:32:44.344 lat (msec) : 20=99.90%, 50=0.05%, 100=0.05% 00:32:44.344 cpu : usr=93.28%, sys=6.23%, ctx=20, majf=0, minf=124 00:32:44.344 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:44.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.344 issued rwts: total=2034,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.344 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:44.344 filename0: (groupid=0, jobs=1): err= 0: pid=1429245: Tue Oct 8 18:40:00 2024 00:32:44.344 read: IOPS=212, BW=26.6MiB/s (27.9MB/s)(267MiB/10047msec) 00:32:44.344 slat (nsec): min=4334, max=77517, avg=14738.03, stdev=4327.24 00:32:44.344 clat (usec): min=11045, max=52356, avg=14053.32, stdev=1503.27 00:32:44.344 lat (usec): min=11059, max=52369, avg=14068.06, stdev=1503.11 00:32:44.344 clat percentiles (usec): 00:32:44.344 | 1.00th=[11731], 5.00th=[12387], 10.00th=[12780], 20.00th=[13173], 00:32:44.344 | 
30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[14222], 00:32:44.344 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15270], 95.00th=[15664], 00:32:44.344 | 99.00th=[16581], 99.50th=[17171], 99.90th=[20055], 99.95th=[49546], 00:32:44.344 | 99.99th=[52167] 00:32:44.344 bw ( KiB/s): min=26368, max=27904, per=34.58%, avg=27340.80, stdev=376.97, samples=20 00:32:44.344 iops : min= 206, max= 218, avg=213.60, stdev= 2.95, samples=20 00:32:44.344 lat (msec) : 20=99.86%, 50=0.09%, 100=0.05% 00:32:44.344 cpu : usr=93.65%, sys=5.85%, ctx=18, majf=0, minf=141 00:32:44.344 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:44.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.344 issued rwts: total=2139,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.344 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:44.344 00:32:44.344 Run status group 0 (all jobs): 00:32:44.344 READ: bw=77.2MiB/s (81.0MB/s), 25.3MiB/s-26.6MiB/s (26.5MB/s-27.9MB/s), io=776MiB (813MB), run=10045-10047msec 00:32:44.344 18:40:01 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:44.344 18:40:01 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:32:44.344 18:40:01 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:32:44.344 18:40:01 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:44.344 18:40:01 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:32:44.344 18:40:01 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:44.344 18:40:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.344 18:40:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:44.344 18:40:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.344 18:40:01 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:44.344 18:40:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.344 18:40:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:44.344 18:40:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.344 00:32:44.344 real 0m11.219s 00:32:44.344 user 0m29.310s 00:32:44.344 sys 0m2.082s 00:32:44.344 18:40:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:44.344 18:40:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:44.344 ************************************ 00:32:44.344 END TEST fio_dif_digest 00:32:44.344 ************************************ 00:32:44.344 18:40:01 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:44.344 18:40:01 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:32:44.344 18:40:01 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:44.344 18:40:01 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:32:44.344 18:40:01 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:44.344 18:40:01 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:32:44.344 18:40:01 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:44.344 18:40:01 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:44.344 rmmod nvme_tcp 00:32:44.344 rmmod nvme_fabrics 00:32:44.344 rmmod nvme_keyring 00:32:44.344 18:40:01 nvmf_dif -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:32:44.344 18:40:01 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:32:44.344 18:40:01 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:32:44.344 18:40:01 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 1422687 ']' 00:32:44.344 18:40:01 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 1422687 00:32:44.344 18:40:01 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 1422687 ']' 00:32:44.344 18:40:01 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 1422687 00:32:44.344 18:40:01 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:32:44.344 18:40:01 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:44.344 18:40:01 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1422687 00:32:44.344 18:40:01 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:44.344 18:40:01 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:44.344 18:40:01 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1422687' 00:32:44.344 killing process with pid 1422687 00:32:44.344 18:40:01 nvmf_dif -- common/autotest_common.sh@969 -- # kill 1422687 00:32:44.344 18:40:01 nvmf_dif -- common/autotest_common.sh@974 -- # wait 1422687 00:32:44.344 18:40:01 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:32:44.344 18:40:01 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:44.603 Waiting for block devices as requested 00:32:44.603 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:44.862 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:44.862 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:44.862 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:45.121 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:45.121 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:45.121 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:45.380 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:45.380 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:32:45.380 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:45.639 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:45.639 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:45.639 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:45.639 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:45.898 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:45.898 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:45.898 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:46.158 18:40:04 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:46.158 18:40:04 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:46.158 18:40:04 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:32:46.158 18:40:04 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:32:46.158 18:40:04 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:46.158 18:40:04 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:32:46.158 18:40:04 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:46.158 18:40:04 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:46.158 18:40:04 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:46.158 18:40:04 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:46.158 18:40:04 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:48.060 18:40:06 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:48.060 00:32:48.060 real 1m7.248s 
00:32:48.060 user 6m30.232s 00:32:48.060 sys 0m17.778s 00:32:48.060 18:40:06 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:48.060 18:40:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:48.060 ************************************ 00:32:48.060 END TEST nvmf_dif 00:32:48.060 ************************************ 00:32:48.060 18:40:06 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:48.060 18:40:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:48.060 18:40:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:48.060 18:40:06 -- common/autotest_common.sh@10 -- # set +x 00:32:48.060 ************************************ 00:32:48.060 START TEST nvmf_abort_qd_sizes 00:32:48.060 ************************************ 00:32:48.060 18:40:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:48.060 * Looking for test storage... 00:32:48.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:32:48.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:48.319 --rc genhtml_branch_coverage=1 00:32:48.319 --rc genhtml_function_coverage=1 00:32:48.319 --rc genhtml_legend=1 00:32:48.319 --rc geninfo_all_blocks=1 00:32:48.319 --rc geninfo_unexecuted_blocks=1 00:32:48.319 00:32:48.319 ' 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:32:48.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:48.319 --rc genhtml_branch_coverage=1 00:32:48.319 --rc genhtml_function_coverage=1 00:32:48.319 --rc genhtml_legend=1 00:32:48.319 --rc geninfo_all_blocks=1 00:32:48.319 --rc geninfo_unexecuted_blocks=1 00:32:48.319 00:32:48.319 ' 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:32:48.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:48.319 --rc genhtml_branch_coverage=1 00:32:48.319 --rc genhtml_function_coverage=1 00:32:48.319 --rc genhtml_legend=1 00:32:48.319 --rc geninfo_all_blocks=1 00:32:48.319 --rc geninfo_unexecuted_blocks=1 00:32:48.319 00:32:48.319 ' 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:32:48.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:48.319 --rc genhtml_branch_coverage=1 00:32:48.319 --rc genhtml_function_coverage=1 00:32:48.319 --rc genhtml_legend=1 00:32:48.319 --rc geninfo_all_blocks=1 00:32:48.319 --rc geninfo_unexecuted_blocks=1 00:32:48.319 00:32:48.319 ' 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:48.319 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:48.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:32:48.320 18:40:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:50.219 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:50.219 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:50.219 Found net devices under 0000:09:00.0: cvl_0_0 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:50.219 Found net devices under 0000:09:00.1: cvl_0_1 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:50.219 18:40:08 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:50.219 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:50.477 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:50.477 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:50.477 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:50.477 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:50.477 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:50.477 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:50.477 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:50.477 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:50.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:50.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:32:50.477 00:32:50.477 --- 10.0.0.2 ping statistics --- 00:32:50.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.477 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:32:50.477 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:50.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
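To let a single dual-port e810 NIC act as both initiator and target on one host, nvmf_tcp_init above keeps cvl_0_1 in the root namespace as the initiator (10.0.0.1) and moves cvl_0_0 into the cvl_0_0_ns_spdk namespace as the target (10.0.0.2), opens TCP port 4420 in the firewall, and ping-checks both directions. A condensed sketch of the same topology follows, assuming root privileges; the interface and namespace names are the ones in the log, the rest is illustrative.

# Sketch, not the harness script: build the back-to-back topology the log
# shows, one NIC port left in the root namespace as the initiator and the
# other moved into a namespace for the SPDK target.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0      # target-side port (from the log)
INI_IF=cvl_0_1      # initiator-side port (from the log)

ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic on the initiator-side interface, tagged so the
# teardown shown earlier can find and remove the rule.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
         -m comment --comment 'SPDK_NVMF:test'

ping -c 1 10.0.0.2                      # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator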
00:32:50.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:32:50.477 00:32:50.477 --- 10.0.0.1 ping statistics --- 00:32:50.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.477 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:32:50.477 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:50.477 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0 00:32:50.477 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:32:50.477 18:40:08 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:51.850 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:51.850 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:51.850 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:51.850 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:51.850 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:51.850 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:51.850 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:51.850 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:51.850 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:51.850 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:51.850 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:51.850 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:51.850 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:51.850 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:51.850 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:51.850 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:52.785 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:32:52.785 18:40:11 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:52.785 18:40:11 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:52.785 18:40:11 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:52.785 18:40:11 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:52.785 18:40:11 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:52.785 18:40:11 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:53.042 18:40:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:53.042 18:40:11 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:53.042 18:40:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:53.042 18:40:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:53.042 18:40:11 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=1434153 00:32:53.042 18:40:11 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:53.042 18:40:11 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 1434153 00:32:53.042 18:40:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 1434153 ']' 00:32:53.042 18:40:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:53.042 18:40:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:53.042 18:40:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:53.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:53.042 18:40:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:53.042 18:40:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:53.042 [2024-10-08 18:40:11.136796] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:32:53.042 [2024-10-08 18:40:11.136880] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:53.042 [2024-10-08 18:40:11.204353] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:53.042 [2024-10-08 18:40:11.317037] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:53.042 [2024-10-08 18:40:11.317115] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:53.042 [2024-10-08 18:40:11.317129] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:53.042 [2024-10-08 18:40:11.317140] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:53.042 [2024-10-08 18:40:11.317150] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:53.042 [2024-10-08 18:40:11.318857] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:32:53.042 [2024-10-08 18:40:11.318919] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:32:53.042 [2024-10-08 18:40:11.318987] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:32:53.042 [2024-10-08 18:40:11.318990] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:32:53.299 18:40:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:53.299 18:40:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:32:53.299 18:40:11 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:53.299 18:40:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:53.299 18:40:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:53.299 18:40:11 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:53.299 18:40:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:53.299 18:40:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:53.299 18:40:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:53.299 18:40:11 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:32:53.299 18:40:11 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:32:53.299 18:40:11 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:0b:00.0 ]] 00:32:53.299 18:40:11 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:53.299 18:40:11 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:32:53.299 18:40:11 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:0b:00.0 ]] 00:32:53.300 18:40:11 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:32:53.300 
18:40:11 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:32:53.300 18:40:11 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:32:53.300 18:40:11 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:32:53.300 18:40:11 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:0b:00.0 00:32:53.300 18:40:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:32:53.300 18:40:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:0b:00.0 00:32:53.300 18:40:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:53.300 18:40:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:53.300 18:40:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:53.300 18:40:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:53.300 ************************************ 00:32:53.300 START TEST spdk_target_abort 00:32:53.300 ************************************ 00:32:53.300 18:40:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:32:53.300 18:40:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:53.300 18:40:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target 00:32:53.300 18:40:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.300 18:40:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:56.576 spdk_targetn1 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:56.576 [2024-10-08 18:40:14.340206] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:56.576 [2024-10-08 18:40:14.372495] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:56.576 18:40:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:59.853 Initializing NVMe Controllers 00:32:59.853 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:59.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:59.853 Initialization complete. Launching workers. 00:32:59.853 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11576, failed: 0 00:32:59.853 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1273, failed to submit 10303 00:32:59.853 success 712, unsuccessful 561, failed 0 00:32:59.853 18:40:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:59.854 18:40:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:03.130 Initializing NVMe Controllers 00:33:03.130 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:03.130 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:03.130 Initialization complete. Launching workers. 00:33:03.130 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8936, failed: 0 00:33:03.130 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1240, failed to submit 7696 00:33:03.130 success 334, unsuccessful 906, failed 0 00:33:03.130 18:40:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:03.130 18:40:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:06.406 Initializing NVMe Controllers 00:33:06.406 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:33:06.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:06.406 Initialization complete. Launching workers. 
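The spdk_target_abort setup and runs logged around this point boil down to five RPCs plus the abort exerciser. The test issues them through its rpc_cmd wrapper against the nvmf_tgt started earlier inside the namespace; the standalone equivalent below, using scripts/rpc.py and the default RPC socket, is a sketch rather than the harness's exact invocation (the RPC names, PCI address, NQN and abort arguments are copied from the log).

# Equivalent standalone sequence for the spdk_target_abort flow shown above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:testnqn

# Claim the local NVMe SSD at 0000:0b:00.0; this creates bdev spdk_targetn1.
$RPC bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target

# Export it over NVMe/TCP on 10.0.0.2:4420.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns "$NQN" spdk_targetn1
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# Run the abort exerciser at the same queue depths as the log (4, 24, 64),
# 50% mixed read/write, 4 KiB I/O, aborting in-flight commands as it goes.
for qd in 4 24 64; do
    "$SPDK/build/examples/abort" -q "$qd" -w rw -M 50 -o 4096 \
        -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$NQN"
done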
00:33:06.406 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30621, failed: 0 00:33:06.406 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2493, failed to submit 28128 00:33:06.406 success 524, unsuccessful 1969, failed 0 00:33:06.406 18:40:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:33:06.406 18:40:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.406 18:40:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:06.406 18:40:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:06.406 18:40:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:33:06.406 18:40:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:06.406 18:40:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:07.337 18:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.337 18:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1434153 00:33:07.337 18:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 1434153 ']' 00:33:07.337 18:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 1434153 00:33:07.337 18:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:33:07.337 18:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:07.337 18:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1434153 00:33:07.337 18:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:07.337 18:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:07.337 18:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1434153' 00:33:07.337 killing process with pid 1434153 00:33:07.337 18:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 1434153 00:33:07.337 18:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 1434153 00:33:07.594 00:33:07.594 real 0m14.347s 00:33:07.594 user 0m54.184s 00:33:07.594 sys 0m2.740s 00:33:07.594 18:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:07.594 18:40:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:07.594 ************************************ 00:33:07.594 END TEST spdk_target_abort 00:33:07.594 ************************************ 00:33:07.594 18:40:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:33:07.594 18:40:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:07.594 18:40:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:07.594 18:40:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:07.852 ************************************ 00:33:07.852 START TEST kernel_target_abort 00:33:07.852 
************************************ 00:33:07.852 18:40:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:33:07.852 18:40:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:33:07.852 18:40:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:33:07.852 18:40:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:07.852 18:40:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:07.852 18:40:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.852 18:40:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.852 18:40:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:07.852 18:40:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.852 18:40:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:07.852 18:40:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:07.852 18:40:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:07.852 18:40:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:07.852 18:40:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:07.852 18:40:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:33:07.852 18:40:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:07.852 18:40:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:07.852 18:40:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:07.852 18:40:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:33:07.852 18:40:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:33:07.852 18:40:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:33:07.852 18:40:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:07.852 18:40:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:08.782 Waiting for block devices as requested 00:33:09.041 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:09.041 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:09.041 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:09.299 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:09.299 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:09.299 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:09.299 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:09.557 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:09.557 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:33:09.557 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:09.815 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:09.815 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:09.815 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:10.073 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:10.073 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:10.073 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:10.073 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:10.332 No valid GPT data, bailing 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:10.332 18:40:28 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:33:10.332 00:33:10.332 Discovery Log Number of Records 2, Generation counter 2 00:33:10.332 =====Discovery Log Entry 0====== 00:33:10.332 trtype: tcp 00:33:10.332 adrfam: ipv4 00:33:10.332 subtype: current discovery subsystem 00:33:10.332 treq: not specified, sq flow control disable supported 00:33:10.332 portid: 1 00:33:10.332 trsvcid: 4420 00:33:10.332 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:10.332 traddr: 10.0.0.1 00:33:10.332 eflags: none 00:33:10.332 sectype: none 00:33:10.332 =====Discovery Log Entry 1====== 00:33:10.332 trtype: tcp 00:33:10.332 adrfam: ipv4 00:33:10.332 subtype: nvme subsystem 00:33:10.332 treq: not specified, sq flow control disable supported 00:33:10.332 portid: 1 00:33:10.332 trsvcid: 4420 00:33:10.332 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:10.332 traddr: 10.0.0.1 00:33:10.332 eflags: none 00:33:10.332 sectype: none 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:10.332 18:40:28 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:10.332 18:40:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:13.608 Initializing NVMe Controllers 00:33:13.608 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:13.608 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:13.608 Initialization complete. Launching workers. 00:33:13.608 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 47492, failed: 0 00:33:13.608 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 47492, failed to submit 0 00:33:13.608 success 0, unsuccessful 47492, failed 0 00:33:13.608 18:40:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:13.608 18:40:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:16.935 Initializing NVMe Controllers 00:33:16.936 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:16.936 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:16.936 Initialization complete. Launching workers. 
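For the kernel_target_abort runs, configure_kernel_target (logged a little earlier) builds the target out of the in-kernel nvmet driver's configfs tree instead of an SPDK app: a subsystem directory for nqn.2016-06.io.spdk:testnqn, a namespace backed by /dev/nvme0n1, and a TCP port on 10.0.0.1:4420 linked to the subsystem, then verifies it with nvme discover. The sketch below reproduces that layout; the xtrace hides the redirect targets of the echo commands, so the configfs attribute names are the standard nvmet ones and should be read as assumptions, and the harness's discover call also passes --hostnqn/--hostid, omitted here.

# Sketch of the kernel nvmet target built above (attribute names assumed).
NQN=nqn.2016-06.io.spdk:testnqn
SUBSYS=/sys/kernel/config/nvmet/subsystems/$NQN
PORT=/sys/kernel/config/nvmet/ports/1

modprobe nvmet
modprobe nvmet_tcp
mkdir -p "$SUBSYS/namespaces/1" "$PORT"

echo "SPDK-$NQN" > "$SUBSYS/attr_model"          # model string (attribute assumed)
echo 1           > "$SUBSYS/attr_allow_any_host" # accept any host NQN

echo /dev/nvme0n1 > "$SUBSYS/namespaces/1/device_path"
echo 1            > "$SUBSYS/namespaces/1/enable"

echo 10.0.0.1 > "$PORT/addr_traddr"
echo tcp      > "$PORT/addr_trtype"
echo 4420     > "$PORT/addr_trsvcid"
echo ipv4     > "$PORT/addr_adrfam"

# Expose the subsystem on the port, then check it from the initiator side.
ln -s "$SUBSYS" "$PORT/subsystems/"
nvme discover -t tcp -a 10.0.0.1 -s 4420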
00:33:16.936 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 93985, failed: 0 00:33:16.936 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21086, failed to submit 72899 00:33:16.936 success 0, unsuccessful 21086, failed 0 00:33:16.936 18:40:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:16.936 18:40:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:20.212 Initializing NVMe Controllers 00:33:20.212 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:20.212 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:20.212 Initialization complete. Launching workers. 00:33:20.212 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 87390, failed: 0 00:33:20.212 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21830, failed to submit 65560 00:33:20.212 success 0, unsuccessful 21830, failed 0 00:33:20.212 18:40:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:33:20.212 18:40:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:20.212 18:40:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:33:20.212 18:40:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:20.212 18:40:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:20.212 18:40:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:20.212 18:40:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:20.212 18:40:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:33:20.212 18:40:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:33:20.212 18:40:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:21.144 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:21.144 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:21.144 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:21.144 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:21.144 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:21.144 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:21.144 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:21.144 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:21.144 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:21.144 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:21.144 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:21.144 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:21.144 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:21.144 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:21.144 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:33:21.144 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:22.082 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:33:22.082 00:33:22.082 real 0m14.415s 00:33:22.082 user 0m6.096s 00:33:22.082 sys 0m3.490s 00:33:22.082 18:40:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:22.082 18:40:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:22.082 ************************************ 00:33:22.082 END TEST kernel_target_abort 00:33:22.082 ************************************ 00:33:22.082 18:40:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:22.082 18:40:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:33:22.082 18:40:40 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:22.082 18:40:40 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:33:22.082 18:40:40 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:22.082 18:40:40 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:33:22.082 18:40:40 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:22.082 18:40:40 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:22.082 rmmod nvme_tcp 00:33:22.082 rmmod nvme_fabrics 00:33:22.082 rmmod nvme_keyring 00:33:22.341 18:40:40 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:22.341 18:40:40 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:33:22.341 18:40:40 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:33:22.341 18:40:40 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 1434153 ']' 00:33:22.341 18:40:40 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 1434153 00:33:22.341 18:40:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 1434153 ']' 00:33:22.341 18:40:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 1434153 00:33:22.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1434153) - No such process 00:33:22.341 18:40:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 1434153 is not found' 00:33:22.341 Process with pid 1434153 is not found 00:33:22.341 18:40:40 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:33:22.341 18:40:40 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:23.274 Waiting for block devices as requested 00:33:23.274 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:23.274 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:23.532 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:23.532 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:23.532 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:23.532 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:23.789 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:23.789 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:23.789 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:33:24.049 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:24.049 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:24.049 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:24.308 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:24.308 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:24.308 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:24.567 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:24.567 0000:80:04.0 
(8086 0e20): vfio-pci -> ioatdma 00:33:24.567 18:40:42 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:24.567 18:40:42 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:24.567 18:40:42 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:33:24.567 18:40:42 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:33:24.567 18:40:42 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:24.567 18:40:42 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:33:24.567 18:40:42 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:24.567 18:40:42 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:24.567 18:40:42 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:24.567 18:40:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:24.567 18:40:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.101 18:40:44 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:27.101 00:33:27.101 real 0m38.541s 00:33:27.101 user 1m2.523s 00:33:27.101 sys 0m9.768s 00:33:27.101 18:40:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:27.101 18:40:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:27.101 ************************************ 00:33:27.101 END TEST nvmf_abort_qd_sizes 00:33:27.101 ************************************ 00:33:27.101 18:40:44 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:27.102 18:40:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:27.102 18:40:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:27.102 18:40:44 -- common/autotest_common.sh@10 -- # set +x 00:33:27.102 ************************************ 00:33:27.102 START TEST keyring_file 00:33:27.102 ************************************ 00:33:27.102 18:40:44 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:27.102 * Looking for test storage... 
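The clean_kernel_target steps traced earlier in this run (nvmf/common.sh@710-721) tear the kernel NVMe/TCP target down leaf-first through configfs before unloading the nvmet modules. A minimal sketch of that same sequence, with paths taken from the log; xtrace does not print redirections, so the destination shown for `echo 0` (disabling the namespace before removal) is an assumption:

    subnqn=nqn.2016-06.io.spdk:testnqn
    cfg=/sys/kernel/config/nvmet
    if [[ -e $cfg/subsystems/$subnqn ]]; then
        # assumed redirect target: the namespace must be disabled before rmdir succeeds
        echo 0 > "$cfg/subsystems/$subnqn/namespaces/1/enable"
        rm -f "$cfg/ports/1/subsystems/$subnqn"       # unlink port -> subsystem
        rmdir "$cfg/subsystems/$subnqn/namespaces/1"  # remove namespace 1
        rmdir "$cfg/ports/1"                          # remove the TCP port
        rmdir "$cfg/subsystems/$subnqn"               # remove the subsystem itself
        modprobe -r nvmet_tcp nvmet                   # unload the kernel target modules
    fi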
00:33:27.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:27.102 18:40:44 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:27.102 18:40:44 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:33:27.102 18:40:44 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:27.102 18:40:45 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:27.102 18:40:45 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:27.102 18:40:45 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:27.102 18:40:45 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:27.102 18:40:45 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:33:27.102 18:40:45 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:33:27.102 18:40:45 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:33:27.102 18:40:45 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:33:27.102 18:40:45 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:33:27.102 18:40:45 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:33:27.102 18:40:45 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:33:27.102 18:40:45 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:27.102 18:40:45 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:33:27.102 18:40:45 keyring_file -- scripts/common.sh@345 -- # : 1 00:33:27.102 18:40:45 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:27.102 18:40:45 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:27.102 18:40:45 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:33:27.102 18:40:45 keyring_file -- scripts/common.sh@353 -- # local d=1 00:33:27.102 18:40:45 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:27.102 18:40:45 keyring_file -- scripts/common.sh@355 -- # echo 1 00:33:27.102 18:40:45 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:33:27.102 18:40:45 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:33:27.102 18:40:45 keyring_file -- scripts/common.sh@353 -- # local d=2 00:33:27.102 18:40:45 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:27.102 18:40:45 keyring_file -- scripts/common.sh@355 -- # echo 2 00:33:27.102 18:40:45 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:33:27.102 18:40:45 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:27.102 18:40:45 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:27.102 18:40:45 keyring_file -- scripts/common.sh@368 -- # return 0 00:33:27.102 18:40:45 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:27.102 18:40:45 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:27.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.102 --rc genhtml_branch_coverage=1 00:33:27.102 --rc genhtml_function_coverage=1 00:33:27.102 --rc genhtml_legend=1 00:33:27.102 --rc geninfo_all_blocks=1 00:33:27.102 --rc geninfo_unexecuted_blocks=1 00:33:27.102 00:33:27.102 ' 00:33:27.102 18:40:45 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:27.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.102 --rc genhtml_branch_coverage=1 00:33:27.102 --rc genhtml_function_coverage=1 00:33:27.102 --rc genhtml_legend=1 00:33:27.102 --rc geninfo_all_blocks=1 
00:33:27.102 --rc geninfo_unexecuted_blocks=1 00:33:27.102 00:33:27.102 ' 00:33:27.102 18:40:45 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:27.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.102 --rc genhtml_branch_coverage=1 00:33:27.102 --rc genhtml_function_coverage=1 00:33:27.102 --rc genhtml_legend=1 00:33:27.102 --rc geninfo_all_blocks=1 00:33:27.102 --rc geninfo_unexecuted_blocks=1 00:33:27.102 00:33:27.102 ' 00:33:27.102 18:40:45 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:27.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.102 --rc genhtml_branch_coverage=1 00:33:27.102 --rc genhtml_function_coverage=1 00:33:27.102 --rc genhtml_legend=1 00:33:27.102 --rc geninfo_all_blocks=1 00:33:27.102 --rc geninfo_unexecuted_blocks=1 00:33:27.102 00:33:27.102 ' 00:33:27.102 18:40:45 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:27.102 18:40:45 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:27.102 18:40:45 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:33:27.102 18:40:45 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:27.102 18:40:45 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:27.102 18:40:45 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:27.102 18:40:45 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:27.102 18:40:45 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:27.102 18:40:45 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:27.102 18:40:45 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:27.102 18:40:45 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:27.102 18:40:45 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:27.102 18:40:45 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:27.102 18:40:45 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:27.102 18:40:45 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:27.102 18:40:45 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:27.102 18:40:45 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:27.102 18:40:45 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:27.102 18:40:45 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:27.102 18:40:45 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:27.102 18:40:45 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:33:27.102 18:40:45 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:27.102 18:40:45 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:27.102 18:40:45 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:27.102 18:40:45 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.102 18:40:45 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.102 18:40:45 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.102 18:40:45 keyring_file -- paths/export.sh@5 -- # export PATH 00:33:27.102 18:40:45 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.102 18:40:45 keyring_file -- nvmf/common.sh@51 -- # : 0 00:33:27.102 18:40:45 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:27.102 18:40:45 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:27.102 18:40:45 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:27.102 18:40:45 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:27.102 18:40:45 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:27.102 18:40:45 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:27.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:27.102 18:40:45 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:27.102 18:40:45 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:27.102 18:40:45 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:27.102 18:40:45 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:27.102 18:40:45 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:27.102 18:40:45 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:27.102 18:40:45 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:33:27.102 18:40:45 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:33:27.102 18:40:45 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:33:27.102 18:40:45 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:27.102 18:40:45 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:33:27.102 18:40:45 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:27.102 18:40:45 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:27.103 18:40:45 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:27.103 18:40:45 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:27.103 18:40:45 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.kUczdEj7Jm 00:33:27.103 18:40:45 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:27.103 18:40:45 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:27.103 18:40:45 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:33:27.103 18:40:45 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:33:27.103 18:40:45 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:33:27.103 18:40:45 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:33:27.103 18:40:45 keyring_file -- nvmf/common.sh@731 -- # python - 00:33:27.103 18:40:45 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.kUczdEj7Jm 00:33:27.103 18:40:45 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.kUczdEj7Jm 00:33:27.103 18:40:45 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.kUczdEj7Jm 00:33:27.103 18:40:45 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:33:27.103 18:40:45 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:27.103 18:40:45 keyring_file -- keyring/common.sh@17 -- # name=key1 00:33:27.103 18:40:45 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:27.103 18:40:45 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:27.103 18:40:45 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:27.103 18:40:45 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ZFekpjKyyk 00:33:27.103 18:40:45 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:27.103 18:40:45 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:27.103 18:40:45 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:33:27.103 18:40:45 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:33:27.103 18:40:45 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:33:27.103 18:40:45 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:33:27.103 18:40:45 keyring_file -- nvmf/common.sh@731 -- # python - 00:33:27.103 18:40:45 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ZFekpjKyyk 00:33:27.103 18:40:45 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ZFekpjKyyk 00:33:27.103 18:40:45 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.ZFekpjKyyk 00:33:27.103 18:40:45 keyring_file -- keyring/file.sh@30 -- # tgtpid=1439954 00:33:27.103 18:40:45 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:27.103 18:40:45 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1439954 00:33:27.103 18:40:45 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1439954 ']' 00:33:27.103 18:40:45 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:27.103 18:40:45 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:27.103 18:40:45 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:27.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:27.103 18:40:45 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:27.103 18:40:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:27.103 [2024-10-08 18:40:45.169519] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:33:27.103 [2024-10-08 18:40:45.169627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1439954 ] 00:33:27.103 [2024-10-08 18:40:45.228461] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:27.103 [2024-10-08 18:40:45.342411] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:27.360 18:40:45 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:27.360 18:40:45 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:33:27.360 18:40:45 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:33:27.360 18:40:45 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.360 18:40:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:27.360 [2024-10-08 18:40:45.625920] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:27.360 null0 00:33:27.618 [2024-10-08 18:40:45.657976] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:27.618 [2024-10-08 18:40:45.658467] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:27.618 18:40:45 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:27.618 18:40:45 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:27.618 18:40:45 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:33:27.618 18:40:45 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:27.618 18:40:45 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:33:27.618 18:40:45 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:27.618 18:40:45 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:33:27.618 18:40:45 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:27.618 18:40:45 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:27.618 18:40:45 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:27.618 18:40:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:27.618 [2024-10-08 18:40:45.686028] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:33:27.618 request: 00:33:27.618 { 00:33:27.618 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:33:27.618 "secure_channel": false, 00:33:27.618 "listen_address": { 00:33:27.618 "trtype": "tcp", 00:33:27.618 "traddr": "127.0.0.1", 00:33:27.618 "trsvcid": "4420" 00:33:27.618 }, 00:33:27.618 "method": "nvmf_subsystem_add_listener", 00:33:27.618 "req_id": 1 00:33:27.618 } 00:33:27.618 Got JSON-RPC error response 00:33:27.618 response: 00:33:27.618 { 00:33:27.618 
"code": -32602, 00:33:27.618 "message": "Invalid parameters" 00:33:27.618 } 00:33:27.618 18:40:45 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:27.618 18:40:45 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:33:27.618 18:40:45 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:27.618 18:40:45 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:27.618 18:40:45 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:27.618 18:40:45 keyring_file -- keyring/file.sh@47 -- # bperfpid=1439964 00:33:27.618 18:40:45 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1439964 /var/tmp/bperf.sock 00:33:27.618 18:40:45 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1439964 ']' 00:33:27.618 18:40:45 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:27.618 18:40:45 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:27.618 18:40:45 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:27.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:27.618 18:40:45 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:27.618 18:40:45 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:33:27.618 18:40:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:27.618 [2024-10-08 18:40:45.738386] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:33:27.618 [2024-10-08 18:40:45.738476] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1439964 ] 00:33:27.618 [2024-10-08 18:40:45.797148] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:27.876 [2024-10-08 18:40:45.909325] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:33:27.876 18:40:46 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:27.876 18:40:46 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:33:27.876 18:40:46 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.kUczdEj7Jm 00:33:27.876 18:40:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.kUczdEj7Jm 00:33:28.133 18:40:46 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ZFekpjKyyk 00:33:28.133 18:40:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ZFekpjKyyk 00:33:28.391 18:40:46 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:33:28.391 18:40:46 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:33:28.391 18:40:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:28.391 18:40:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:28.391 18:40:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:33:28.648 18:40:46 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.kUczdEj7Jm == \/\t\m\p\/\t\m\p\.\k\U\c\z\d\E\j\7\J\m ]] 00:33:28.648 18:40:46 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:33:28.648 18:40:46 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:33:28.648 18:40:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:28.648 18:40:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:28.648 18:40:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:28.905 18:40:47 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.ZFekpjKyyk == \/\t\m\p\/\t\m\p\.\Z\F\e\k\p\j\K\y\y\k ]] 00:33:28.905 18:40:47 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:33:28.905 18:40:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:28.905 18:40:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:28.905 18:40:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:28.905 18:40:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:28.905 18:40:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:29.162 18:40:47 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:33:29.162 18:40:47 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:33:29.162 18:40:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:29.162 18:40:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:29.162 18:40:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:29.162 18:40:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:29.162 18:40:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:29.419 18:40:47 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:33:29.419 18:40:47 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:29.419 18:40:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:29.676 [2024-10-08 18:40:47.889255] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:29.676 nvme0n1 00:33:29.933 18:40:47 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:33:29.933 18:40:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:29.933 18:40:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:29.933 18:40:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:29.933 18:40:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:29.933 18:40:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:30.191 18:40:48 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:33:30.191 18:40:48 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:33:30.191 18:40:48 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:33:30.191 18:40:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:30.191 18:40:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:30.191 18:40:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:30.191 18:40:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:30.449 18:40:48 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:33:30.449 18:40:48 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:30.449 Running I/O for 1 seconds... 00:33:31.380 10247.00 IOPS, 40.03 MiB/s 00:33:31.380 Latency(us) 00:33:31.380 [2024-10-08T16:40:49.667Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:31.380 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:33:31.380 nvme0n1 : 1.01 10292.98 40.21 0.00 0.00 12395.74 5072.97 19223.89 00:33:31.380 [2024-10-08T16:40:49.667Z] =================================================================================================================== 00:33:31.380 [2024-10-08T16:40:49.667Z] Total : 10292.98 40.21 0.00 0.00 12395.74 5072.97 19223.89 00:33:31.380 { 00:33:31.380 "results": [ 00:33:31.380 { 00:33:31.380 "job": "nvme0n1", 00:33:31.380 "core_mask": "0x2", 00:33:31.380 "workload": "randrw", 00:33:31.380 "percentage": 50, 00:33:31.380 "status": "finished", 00:33:31.380 "queue_depth": 128, 00:33:31.380 "io_size": 4096, 00:33:31.380 "runtime": 1.008066, 00:33:31.380 "iops": 10292.976848738079, 00:33:31.380 "mibps": 40.20694081538312, 00:33:31.380 "io_failed": 0, 00:33:31.380 "io_timeout": 0, 00:33:31.380 "avg_latency_us": 12395.743549501698, 00:33:31.380 "min_latency_us": 5072.971851851852, 00:33:31.380 "max_latency_us": 19223.893333333333 00:33:31.380 } 00:33:31.380 ], 00:33:31.380 "core_count": 1 00:33:31.380 } 00:33:31.380 18:40:49 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:31.380 18:40:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:31.637 18:40:49 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:33:31.637 18:40:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:31.637 18:40:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:31.894 18:40:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:31.894 18:40:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:31.894 18:40:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:32.151 18:40:50 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:33:32.151 18:40:50 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:33:32.151 18:40:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:32.151 18:40:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:32.151 18:40:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:32.151 18:40:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:32.151 18:40:50 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:32.409 18:40:50 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:33:32.409 18:40:50 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:32.409 18:40:50 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:33:32.409 18:40:50 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:32.409 18:40:50 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:33:32.409 18:40:50 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:32.409 18:40:50 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:33:32.409 18:40:50 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:32.409 18:40:50 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:32.409 18:40:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:32.666 [2024-10-08 18:40:50.747161] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:32.666 [2024-10-08 18:40:50.747780] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14653b0 (107): Transport endpoint is not connected 00:33:32.666 [2024-10-08 18:40:50.748769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14653b0 (9): Bad file descriptor 00:33:32.666 [2024-10-08 18:40:50.749768] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:32.667 [2024-10-08 18:40:50.749787] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:32.667 [2024-10-08 18:40:50.749801] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:33:32.667 [2024-10-08 18:40:50.749814] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
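The attach failure traced above is intentional: the earlier attach over the same listener succeeded with key0, so presenting key1 instead is expected to be refused, and the JSON-RPC request/response dump that follows records the resulting -5 (Input/output error). Stripped of the NOT/bperf_cmd wrappers, the check amounts to roughly the following sketch, reusing the rpc_py shorthand from the earlier sketch:

    # wrong-key negative test: the attach must fail, so a success is the error case
    if $rpc_py bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
           -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1; then
        echo "FAIL: controller attached with the wrong PSK" >&2
        exit 1
    fi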
00:33:32.667 request: 00:33:32.667 { 00:33:32.667 "name": "nvme0", 00:33:32.667 "trtype": "tcp", 00:33:32.667 "traddr": "127.0.0.1", 00:33:32.667 "adrfam": "ipv4", 00:33:32.667 "trsvcid": "4420", 00:33:32.667 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:32.667 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:32.667 "prchk_reftag": false, 00:33:32.667 "prchk_guard": false, 00:33:32.667 "hdgst": false, 00:33:32.667 "ddgst": false, 00:33:32.667 "psk": "key1", 00:33:32.667 "allow_unrecognized_csi": false, 00:33:32.667 "method": "bdev_nvme_attach_controller", 00:33:32.667 "req_id": 1 00:33:32.667 } 00:33:32.667 Got JSON-RPC error response 00:33:32.667 response: 00:33:32.667 { 00:33:32.667 "code": -5, 00:33:32.667 "message": "Input/output error" 00:33:32.667 } 00:33:32.667 18:40:50 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:33:32.667 18:40:50 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:32.667 18:40:50 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:32.667 18:40:50 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:32.667 18:40:50 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:33:32.667 18:40:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:32.667 18:40:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:32.667 18:40:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:32.667 18:40:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:32.667 18:40:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:32.924 18:40:51 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:33:32.924 18:40:51 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:33:32.924 18:40:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:32.924 18:40:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:32.924 18:40:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:32.924 18:40:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:32.924 18:40:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:33.181 18:40:51 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:33:33.181 18:40:51 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:33:33.181 18:40:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:33.438 18:40:51 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:33:33.439 18:40:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:33:33.696 18:40:51 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:33:33.696 18:40:51 keyring_file -- keyring/file.sh@78 -- # jq length 00:33:33.696 18:40:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:33.953 18:40:52 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:33:33.953 18:40:52 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.kUczdEj7Jm 00:33:33.953 18:40:52 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.kUczdEj7Jm 00:33:33.953 18:40:52 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:33:33.953 18:40:52 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.kUczdEj7Jm 00:33:33.953 18:40:52 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:33:33.953 18:40:52 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:33.953 18:40:52 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:33:33.953 18:40:52 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:33.953 18:40:52 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.kUczdEj7Jm 00:33:33.953 18:40:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.kUczdEj7Jm 00:33:34.210 [2024-10-08 18:40:52.375665] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.kUczdEj7Jm': 0100660 00:33:34.210 [2024-10-08 18:40:52.375700] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:34.210 request: 00:33:34.210 { 00:33:34.210 "name": "key0", 00:33:34.210 "path": "/tmp/tmp.kUczdEj7Jm", 00:33:34.210 "method": "keyring_file_add_key", 00:33:34.210 "req_id": 1 00:33:34.210 } 00:33:34.210 Got JSON-RPC error response 00:33:34.210 response: 00:33:34.210 { 00:33:34.210 "code": -1, 00:33:34.210 "message": "Operation not permitted" 00:33:34.210 } 00:33:34.210 18:40:52 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:33:34.210 18:40:52 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:34.210 18:40:52 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:34.210 18:40:52 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:34.210 18:40:52 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.kUczdEj7Jm 00:33:34.210 18:40:52 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.kUczdEj7Jm 00:33:34.210 18:40:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.kUczdEj7Jm 00:33:34.467 18:40:52 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.kUczdEj7Jm 00:33:34.467 18:40:52 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:33:34.467 18:40:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:34.467 18:40:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:34.467 18:40:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:34.467 18:40:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:34.467 18:40:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:34.725 18:40:52 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:33:34.725 18:40:52 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:34.725 18:40:52 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:33:34.725 18:40:52 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:34.725 18:40:52 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:33:34.725 18:40:52 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:34.725 18:40:52 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:33:34.725 18:40:52 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:34.725 18:40:52 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:34.725 18:40:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:34.982 [2024-10-08 18:40:53.201888] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.kUczdEj7Jm': No such file or directory 00:33:34.982 [2024-10-08 18:40:53.201924] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:34.982 [2024-10-08 18:40:53.201953] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:34.982 [2024-10-08 18:40:53.201965] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:33:34.982 [2024-10-08 18:40:53.201977] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:34.982 [2024-10-08 18:40:53.201987] bdev_nvme.c:6541:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:34.982 request: 00:33:34.982 { 00:33:34.982 "name": "nvme0", 00:33:34.982 "trtype": "tcp", 00:33:34.982 "traddr": "127.0.0.1", 00:33:34.982 "adrfam": "ipv4", 00:33:34.982 "trsvcid": "4420", 00:33:34.982 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:34.982 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:34.982 "prchk_reftag": false, 00:33:34.982 "prchk_guard": false, 00:33:34.982 "hdgst": false, 00:33:34.982 "ddgst": false, 00:33:34.982 "psk": "key0", 00:33:34.982 "allow_unrecognized_csi": false, 00:33:34.982 "method": "bdev_nvme_attach_controller", 00:33:34.982 "req_id": 1 00:33:34.982 } 00:33:34.982 Got JSON-RPC error response 00:33:34.982 response: 00:33:34.982 { 00:33:34.982 "code": -19, 00:33:34.982 "message": "No such device" 00:33:34.982 } 00:33:34.982 18:40:53 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:33:34.982 18:40:53 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:34.982 18:40:53 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:34.982 18:40:53 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:34.982 18:40:53 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:33:34.982 18:40:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:35.240 18:40:53 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:35.240 18:40:53 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:33:35.240 18:40:53 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:35.240 18:40:53 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:35.240 18:40:53 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:35.240 18:40:53 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:35.240 18:40:53 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.DPg6x2Rv8M 00:33:35.240 18:40:53 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:35.240 18:40:53 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:35.240 18:40:53 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:33:35.240 18:40:53 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:33:35.240 18:40:53 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:33:35.240 18:40:53 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:33:35.240 18:40:53 keyring_file -- nvmf/common.sh@731 -- # python - 00:33:35.497 18:40:53 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.DPg6x2Rv8M 00:33:35.497 18:40:53 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.DPg6x2Rv8M 00:33:35.497 18:40:53 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.DPg6x2Rv8M 00:33:35.497 18:40:53 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DPg6x2Rv8M 00:33:35.497 18:40:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DPg6x2Rv8M 00:33:35.755 18:40:53 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:35.755 18:40:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:36.012 nvme0n1 00:33:36.012 18:40:54 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:33:36.012 18:40:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:36.012 18:40:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:36.012 18:40:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:36.012 18:40:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:36.012 18:40:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:36.269 18:40:54 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:33:36.269 18:40:54 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:33:36.269 18:40:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:36.527 18:40:54 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:33:36.527 18:40:54 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:33:36.527 18:40:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:36.527 18:40:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:36.527 18:40:54 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:36.784 18:40:54 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:33:36.784 18:40:54 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:33:36.784 18:40:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:36.784 18:40:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:36.784 18:40:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:36.784 18:40:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:36.784 18:40:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:37.041 18:40:55 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:33:37.041 18:40:55 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:37.041 18:40:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:37.299 18:40:55 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:33:37.299 18:40:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:37.299 18:40:55 keyring_file -- keyring/file.sh@105 -- # jq length 00:33:37.556 18:40:55 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:33:37.556 18:40:55 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DPg6x2Rv8M 00:33:37.556 18:40:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DPg6x2Rv8M 00:33:37.814 18:40:56 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ZFekpjKyyk 00:33:37.814 18:40:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ZFekpjKyyk 00:33:38.071 18:40:56 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:38.071 18:40:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:38.636 nvme0n1 00:33:38.636 18:40:56 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:33:38.636 18:40:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:33:38.894 18:40:57 keyring_file -- keyring/file.sh@113 -- # config='{ 00:33:38.894 "subsystems": [ 00:33:38.894 { 00:33:38.894 "subsystem": "keyring", 00:33:38.894 "config": [ 00:33:38.894 { 00:33:38.894 "method": "keyring_file_add_key", 00:33:38.894 "params": { 00:33:38.894 "name": "key0", 00:33:38.894 "path": "/tmp/tmp.DPg6x2Rv8M" 00:33:38.894 } 00:33:38.894 }, 00:33:38.894 { 00:33:38.894 "method": "keyring_file_add_key", 00:33:38.894 "params": { 00:33:38.894 "name": "key1", 00:33:38.894 "path": "/tmp/tmp.ZFekpjKyyk" 00:33:38.894 } 00:33:38.894 } 00:33:38.894 ] 00:33:38.894 
}, 00:33:38.894 { 00:33:38.894 "subsystem": "iobuf", 00:33:38.894 "config": [ 00:33:38.894 { 00:33:38.894 "method": "iobuf_set_options", 00:33:38.894 "params": { 00:33:38.894 "small_pool_count": 8192, 00:33:38.894 "large_pool_count": 1024, 00:33:38.894 "small_bufsize": 8192, 00:33:38.894 "large_bufsize": 135168 00:33:38.894 } 00:33:38.894 } 00:33:38.894 ] 00:33:38.894 }, 00:33:38.894 { 00:33:38.894 "subsystem": "sock", 00:33:38.894 "config": [ 00:33:38.894 { 00:33:38.894 "method": "sock_set_default_impl", 00:33:38.894 "params": { 00:33:38.894 "impl_name": "posix" 00:33:38.894 } 00:33:38.894 }, 00:33:38.894 { 00:33:38.894 "method": "sock_impl_set_options", 00:33:38.894 "params": { 00:33:38.894 "impl_name": "ssl", 00:33:38.894 "recv_buf_size": 4096, 00:33:38.894 "send_buf_size": 4096, 00:33:38.894 "enable_recv_pipe": true, 00:33:38.894 "enable_quickack": false, 00:33:38.894 "enable_placement_id": 0, 00:33:38.894 "enable_zerocopy_send_server": true, 00:33:38.894 "enable_zerocopy_send_client": false, 00:33:38.894 "zerocopy_threshold": 0, 00:33:38.894 "tls_version": 0, 00:33:38.894 "enable_ktls": false 00:33:38.894 } 00:33:38.894 }, 00:33:38.894 { 00:33:38.894 "method": "sock_impl_set_options", 00:33:38.894 "params": { 00:33:38.894 "impl_name": "posix", 00:33:38.894 "recv_buf_size": 2097152, 00:33:38.894 "send_buf_size": 2097152, 00:33:38.894 "enable_recv_pipe": true, 00:33:38.894 "enable_quickack": false, 00:33:38.894 "enable_placement_id": 0, 00:33:38.894 "enable_zerocopy_send_server": true, 00:33:38.894 "enable_zerocopy_send_client": false, 00:33:38.894 "zerocopy_threshold": 0, 00:33:38.894 "tls_version": 0, 00:33:38.894 "enable_ktls": false 00:33:38.894 } 00:33:38.894 } 00:33:38.894 ] 00:33:38.894 }, 00:33:38.894 { 00:33:38.894 "subsystem": "vmd", 00:33:38.894 "config": [] 00:33:38.894 }, 00:33:38.894 { 00:33:38.894 "subsystem": "accel", 00:33:38.894 "config": [ 00:33:38.894 { 00:33:38.894 "method": "accel_set_options", 00:33:38.894 "params": { 00:33:38.894 "small_cache_size": 128, 00:33:38.894 "large_cache_size": 16, 00:33:38.894 "task_count": 2048, 00:33:38.894 "sequence_count": 2048, 00:33:38.894 "buf_count": 2048 00:33:38.894 } 00:33:38.894 } 00:33:38.894 ] 00:33:38.894 }, 00:33:38.894 { 00:33:38.894 "subsystem": "bdev", 00:33:38.894 "config": [ 00:33:38.894 { 00:33:38.894 "method": "bdev_set_options", 00:33:38.894 "params": { 00:33:38.894 "bdev_io_pool_size": 65535, 00:33:38.894 "bdev_io_cache_size": 256, 00:33:38.894 "bdev_auto_examine": true, 00:33:38.894 "iobuf_small_cache_size": 128, 00:33:38.894 "iobuf_large_cache_size": 16 00:33:38.894 } 00:33:38.894 }, 00:33:38.894 { 00:33:38.894 "method": "bdev_raid_set_options", 00:33:38.894 "params": { 00:33:38.894 "process_window_size_kb": 1024, 00:33:38.894 "process_max_bandwidth_mb_sec": 0 00:33:38.894 } 00:33:38.894 }, 00:33:38.894 { 00:33:38.894 "method": "bdev_iscsi_set_options", 00:33:38.894 "params": { 00:33:38.894 "timeout_sec": 30 00:33:38.894 } 00:33:38.894 }, 00:33:38.894 { 00:33:38.894 "method": "bdev_nvme_set_options", 00:33:38.894 "params": { 00:33:38.894 "action_on_timeout": "none", 00:33:38.894 "timeout_us": 0, 00:33:38.894 "timeout_admin_us": 0, 00:33:38.894 "keep_alive_timeout_ms": 10000, 00:33:38.894 "arbitration_burst": 0, 00:33:38.894 "low_priority_weight": 0, 00:33:38.894 "medium_priority_weight": 0, 00:33:38.894 "high_priority_weight": 0, 00:33:38.894 "nvme_adminq_poll_period_us": 10000, 00:33:38.894 "nvme_ioq_poll_period_us": 0, 00:33:38.894 "io_queue_requests": 512, 00:33:38.894 "delay_cmd_submit": true, 00:33:38.894 
"transport_retry_count": 4, 00:33:38.894 "bdev_retry_count": 3, 00:33:38.894 "transport_ack_timeout": 0, 00:33:38.894 "ctrlr_loss_timeout_sec": 0, 00:33:38.895 "reconnect_delay_sec": 0, 00:33:38.895 "fast_io_fail_timeout_sec": 0, 00:33:38.895 "disable_auto_failback": false, 00:33:38.895 "generate_uuids": false, 00:33:38.895 "transport_tos": 0, 00:33:38.895 "nvme_error_stat": false, 00:33:38.895 "rdma_srq_size": 0, 00:33:38.895 "io_path_stat": false, 00:33:38.895 "allow_accel_sequence": false, 00:33:38.895 "rdma_max_cq_size": 0, 00:33:38.895 "rdma_cm_event_timeout_ms": 0, 00:33:38.895 "dhchap_digests": [ 00:33:38.895 "sha256", 00:33:38.895 "sha384", 00:33:38.895 "sha512" 00:33:38.895 ], 00:33:38.895 "dhchap_dhgroups": [ 00:33:38.895 "null", 00:33:38.895 "ffdhe2048", 00:33:38.895 "ffdhe3072", 00:33:38.895 "ffdhe4096", 00:33:38.895 "ffdhe6144", 00:33:38.895 "ffdhe8192" 00:33:38.895 ] 00:33:38.895 } 00:33:38.895 }, 00:33:38.895 { 00:33:38.895 "method": "bdev_nvme_attach_controller", 00:33:38.895 "params": { 00:33:38.895 "name": "nvme0", 00:33:38.895 "trtype": "TCP", 00:33:38.895 "adrfam": "IPv4", 00:33:38.895 "traddr": "127.0.0.1", 00:33:38.895 "trsvcid": "4420", 00:33:38.895 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:38.895 "prchk_reftag": false, 00:33:38.895 "prchk_guard": false, 00:33:38.895 "ctrlr_loss_timeout_sec": 0, 00:33:38.895 "reconnect_delay_sec": 0, 00:33:38.895 "fast_io_fail_timeout_sec": 0, 00:33:38.895 "psk": "key0", 00:33:38.895 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:38.895 "hdgst": false, 00:33:38.895 "ddgst": false, 00:33:38.895 "multipath": "multipath" 00:33:38.895 } 00:33:38.895 }, 00:33:38.895 { 00:33:38.895 "method": "bdev_nvme_set_hotplug", 00:33:38.895 "params": { 00:33:38.895 "period_us": 100000, 00:33:38.895 "enable": false 00:33:38.895 } 00:33:38.895 }, 00:33:38.895 { 00:33:38.895 "method": "bdev_wait_for_examine" 00:33:38.895 } 00:33:38.895 ] 00:33:38.895 }, 00:33:38.895 { 00:33:38.895 "subsystem": "nbd", 00:33:38.895 "config": [] 00:33:38.895 } 00:33:38.895 ] 00:33:38.895 }' 00:33:38.895 18:40:57 keyring_file -- keyring/file.sh@115 -- # killprocess 1439964 00:33:38.895 18:40:57 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1439964 ']' 00:33:38.895 18:40:57 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1439964 00:33:38.895 18:40:57 keyring_file -- common/autotest_common.sh@955 -- # uname 00:33:38.895 18:40:57 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:38.895 18:40:57 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1439964 00:33:38.895 18:40:57 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:38.895 18:40:57 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:38.895 18:40:57 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1439964' 00:33:38.895 killing process with pid 1439964 00:33:38.895 18:40:57 keyring_file -- common/autotest_common.sh@969 -- # kill 1439964 00:33:38.895 Received shutdown signal, test time was about 1.000000 seconds 00:33:38.895 00:33:38.895 Latency(us) 00:33:38.895 [2024-10-08T16:40:57.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:38.895 [2024-10-08T16:40:57.182Z] =================================================================================================================== 00:33:38.895 [2024-10-08T16:40:57.182Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:38.895 18:40:57 keyring_file -- 
common/autotest_common.sh@974 -- # wait 1439964 00:33:39.153 18:40:57 keyring_file -- keyring/file.sh@118 -- # bperfpid=1441442 00:33:39.153 18:40:57 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1441442 /var/tmp/bperf.sock 00:33:39.153 18:40:57 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1441442 ']' 00:33:39.153 18:40:57 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:33:39.153 18:40:57 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:39.153 18:40:57 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:39.153 18:40:57 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:33:39.153 "subsystems": [ 00:33:39.153 { 00:33:39.153 "subsystem": "keyring", 00:33:39.153 "config": [ 00:33:39.153 { 00:33:39.153 "method": "keyring_file_add_key", 00:33:39.153 "params": { 00:33:39.153 "name": "key0", 00:33:39.153 "path": "/tmp/tmp.DPg6x2Rv8M" 00:33:39.153 } 00:33:39.153 }, 00:33:39.153 { 00:33:39.153 "method": "keyring_file_add_key", 00:33:39.153 "params": { 00:33:39.153 "name": "key1", 00:33:39.153 "path": "/tmp/tmp.ZFekpjKyyk" 00:33:39.153 } 00:33:39.153 } 00:33:39.153 ] 00:33:39.153 }, 00:33:39.153 { 00:33:39.153 "subsystem": "iobuf", 00:33:39.153 "config": [ 00:33:39.153 { 00:33:39.153 "method": "iobuf_set_options", 00:33:39.153 "params": { 00:33:39.153 "small_pool_count": 8192, 00:33:39.153 "large_pool_count": 1024, 00:33:39.153 "small_bufsize": 8192, 00:33:39.153 "large_bufsize": 135168 00:33:39.153 } 00:33:39.153 } 00:33:39.153 ] 00:33:39.153 }, 00:33:39.153 { 00:33:39.153 "subsystem": "sock", 00:33:39.153 "config": [ 00:33:39.153 { 00:33:39.153 "method": "sock_set_default_impl", 00:33:39.153 "params": { 00:33:39.153 "impl_name": "posix" 00:33:39.153 } 00:33:39.153 }, 00:33:39.153 { 00:33:39.153 "method": "sock_impl_set_options", 00:33:39.153 "params": { 00:33:39.153 "impl_name": "ssl", 00:33:39.153 "recv_buf_size": 4096, 00:33:39.153 "send_buf_size": 4096, 00:33:39.153 "enable_recv_pipe": true, 00:33:39.153 "enable_quickack": false, 00:33:39.153 "enable_placement_id": 0, 00:33:39.153 "enable_zerocopy_send_server": true, 00:33:39.153 "enable_zerocopy_send_client": false, 00:33:39.153 "zerocopy_threshold": 0, 00:33:39.153 "tls_version": 0, 00:33:39.153 "enable_ktls": false 00:33:39.153 } 00:33:39.153 }, 00:33:39.153 { 00:33:39.153 "method": "sock_impl_set_options", 00:33:39.153 "params": { 00:33:39.153 "impl_name": "posix", 00:33:39.153 "recv_buf_size": 2097152, 00:33:39.153 "send_buf_size": 2097152, 00:33:39.153 "enable_recv_pipe": true, 00:33:39.153 "enable_quickack": false, 00:33:39.153 "enable_placement_id": 0, 00:33:39.153 "enable_zerocopy_send_server": true, 00:33:39.153 "enable_zerocopy_send_client": false, 00:33:39.153 "zerocopy_threshold": 0, 00:33:39.153 "tls_version": 0, 00:33:39.153 "enable_ktls": false 00:33:39.153 } 00:33:39.153 } 00:33:39.153 ] 00:33:39.153 }, 00:33:39.153 { 00:33:39.153 "subsystem": "vmd", 00:33:39.153 "config": [] 00:33:39.153 }, 00:33:39.153 { 00:33:39.153 "subsystem": "accel", 00:33:39.153 "config": [ 00:33:39.153 { 00:33:39.153 "method": "accel_set_options", 00:33:39.153 "params": { 00:33:39.153 "small_cache_size": 128, 00:33:39.153 "large_cache_size": 16, 00:33:39.153 "task_count": 2048, 00:33:39.153 "sequence_count": 2048, 00:33:39.153 "buf_count": 2048 00:33:39.153 } 00:33:39.153 } 00:33:39.153 ] 00:33:39.153 }, 
00:33:39.153 { 00:33:39.153 "subsystem": "bdev", 00:33:39.153 "config": [ 00:33:39.153 { 00:33:39.153 "method": "bdev_set_options", 00:33:39.153 "params": { 00:33:39.154 "bdev_io_pool_size": 65535, 00:33:39.154 "bdev_io_cache_size": 256, 00:33:39.154 "bdev_auto_examine": true, 00:33:39.154 "iobuf_small_cache_size": 128, 00:33:39.154 "iobuf_large_cache_size": 16 00:33:39.154 } 00:33:39.154 }, 00:33:39.154 { 00:33:39.154 "method": "bdev_raid_set_options", 00:33:39.154 "params": { 00:33:39.154 "process_window_size_kb": 1024, 00:33:39.154 "process_max_bandwidth_mb_sec": 0 00:33:39.154 } 00:33:39.154 }, 00:33:39.154 { 00:33:39.154 "method": "bdev_iscsi_set_options", 00:33:39.154 "params": { 00:33:39.154 "timeout_sec": 30 00:33:39.154 } 00:33:39.154 }, 00:33:39.154 { 00:33:39.154 "method": "bdev_nvme_set_options", 00:33:39.154 "params": { 00:33:39.154 "action_on_timeout": "none", 00:33:39.154 "timeout_us": 0, 00:33:39.154 "timeout_admin_us": 0, 00:33:39.154 "keep_alive_timeout_ms": 10000, 00:33:39.154 "arbitration_burst": 0, 00:33:39.154 "low_priority_weight": 0, 00:33:39.154 "medium_priority_weight": 0, 00:33:39.154 "high_priority_weight": 0, 00:33:39.154 "nvme_adminq_poll_period_us": 10000, 00:33:39.154 "nvme_ioq_poll_period_us": 0, 00:33:39.154 "io_queue_requests": 512, 00:33:39.154 "delay_cmd_submit": true, 00:33:39.154 "transport_retry_count": 4, 00:33:39.154 "bdev_retry_count": 3, 00:33:39.154 "transport_ack_timeout": 0, 00:33:39.154 "ctrlr_loss_timeout_sec": 0, 00:33:39.154 "reconnect_delay_sec": 0, 00:33:39.154 "fast_io_fail_timeout_sec": 0, 00:33:39.154 "disable_auto_failback": false, 00:33:39.154 "generate_uuids": false, 00:33:39.154 "transport_tos": 0, 00:33:39.154 "nvme_error_stat": false, 00:33:39.154 "rdma_srq_size": 0, 00:33:39.154 "io_path_stat": false, 00:33:39.154 "allow_accel_sequence": false, 00:33:39.154 "rdma_max_cq_size": 0, 00:33:39.154 "rdma_cm_event_timeout_ms": 0, 00:33:39.154 "dhchap_digests": [ 00:33:39.154 "sha256", 00:33:39.154 "sha384", 00:33:39.154 "sha512" 00:33:39.154 ], 00:33:39.154 "dhchap_dhgroups": [ 00:33:39.154 "null", 00:33:39.154 "ffdhe2048", 00:33:39.154 "ffdhe3072", 00:33:39.154 "ffdhe4096", 00:33:39.154 "ffdhe6144", 00:33:39.154 "ffdhe8192" 00:33:39.154 ] 00:33:39.154 } 00:33:39.154 }, 00:33:39.154 { 00:33:39.154 "method": "bdev_nvme_attach_controller", 00:33:39.154 "params": { 00:33:39.154 "name": "nvme0", 00:33:39.154 "trtype": "TCP", 00:33:39.154 "adrfam": "IPv4", 00:33:39.154 "traddr": "127.0.0.1", 00:33:39.154 "trsvcid": "4420", 00:33:39.154 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:39.154 "prchk_reftag": false, 00:33:39.154 "prchk_guard": false, 00:33:39.154 "ctrlr_loss_timeout_sec": 0, 00:33:39.154 "reconnect_delay_sec": 0, 00:33:39.154 "fast_io_fail_timeout_sec": 0, 00:33:39.154 "psk": "key0", 00:33:39.154 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:39.154 "hdgst": false, 00:33:39.154 "ddgst": false, 00:33:39.154 "multipath": "multipath" 00:33:39.154 } 00:33:39.154 }, 00:33:39.154 { 00:33:39.154 "method": "bdev_nvme_set_hotplug", 00:33:39.154 "params": { 00:33:39.154 "period_us": 100000, 00:33:39.154 "enable": false 00:33:39.154 } 00:33:39.154 }, 00:33:39.154 { 00:33:39.154 "method": "bdev_wait_for_examine" 00:33:39.154 } 00:33:39.154 ] 00:33:39.154 }, 00:33:39.154 { 00:33:39.154 "subsystem": "nbd", 00:33:39.154 "config": [] 00:33:39.154 } 00:33:39.154 ] 00:33:39.154 }' 00:33:39.154 18:40:57 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:33:39.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:39.154 18:40:57 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:39.154 18:40:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:39.154 [2024-10-08 18:40:57.360894] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 00:33:39.154 [2024-10-08 18:40:57.360992] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1441442 ] 00:33:39.154 [2024-10-08 18:40:57.423186] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:39.412 [2024-10-08 18:40:57.535854] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:33:39.669 [2024-10-08 18:40:57.730415] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:40.236 18:40:58 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:40.236 18:40:58 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:33:40.236 18:40:58 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:33:40.236 18:40:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:40.236 18:40:58 keyring_file -- keyring/file.sh@121 -- # jq length 00:33:40.493 18:40:58 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:33:40.493 18:40:58 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:33:40.493 18:40:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:40.493 18:40:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:40.493 18:40:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:40.493 18:40:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:40.493 18:40:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:40.751 18:40:58 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:33:40.751 18:40:58 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:33:40.751 18:40:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:40.751 18:40:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:40.751 18:40:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:40.751 18:40:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:40.751 18:40:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:41.010 18:40:59 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:33:41.010 18:40:59 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:33:41.010 18:40:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:41.010 18:40:59 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:33:41.268 18:40:59 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:33:41.268 18:40:59 keyring_file -- keyring/file.sh@1 -- # cleanup 00:33:41.268 18:40:59 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.DPg6x2Rv8M 
/tmp/tmp.ZFekpjKyyk 00:33:41.268 18:40:59 keyring_file -- keyring/file.sh@20 -- # killprocess 1441442 00:33:41.268 18:40:59 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1441442 ']' 00:33:41.268 18:40:59 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1441442 00:33:41.268 18:40:59 keyring_file -- common/autotest_common.sh@955 -- # uname 00:33:41.268 18:40:59 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:41.268 18:40:59 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1441442 00:33:41.268 18:40:59 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:41.268 18:40:59 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:41.268 18:40:59 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1441442' 00:33:41.268 killing process with pid 1441442 00:33:41.268 18:40:59 keyring_file -- common/autotest_common.sh@969 -- # kill 1441442 00:33:41.268 Received shutdown signal, test time was about 1.000000 seconds 00:33:41.268 00:33:41.268 Latency(us) 00:33:41.268 [2024-10-08T16:40:59.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:41.268 [2024-10-08T16:40:59.555Z] =================================================================================================================== 00:33:41.268 [2024-10-08T16:40:59.555Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:41.268 18:40:59 keyring_file -- common/autotest_common.sh@974 -- # wait 1441442 00:33:41.525 18:40:59 keyring_file -- keyring/file.sh@21 -- # killprocess 1439954 00:33:41.525 18:40:59 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1439954 ']' 00:33:41.525 18:40:59 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1439954 00:33:41.525 18:40:59 keyring_file -- common/autotest_common.sh@955 -- # uname 00:33:41.525 18:40:59 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:41.525 18:40:59 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1439954 00:33:41.525 18:40:59 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:41.525 18:40:59 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:41.525 18:40:59 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1439954' 00:33:41.525 killing process with pid 1439954 00:33:41.525 18:40:59 keyring_file -- common/autotest_common.sh@969 -- # kill 1439954 00:33:41.525 18:40:59 keyring_file -- common/autotest_common.sh@974 -- # wait 1439954 00:33:42.089 00:33:42.089 real 0m15.331s 00:33:42.089 user 0m38.668s 00:33:42.089 sys 0m3.310s 00:33:42.089 18:41:00 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:42.089 18:41:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:42.089 ************************************ 00:33:42.089 END TEST keyring_file 00:33:42.089 ************************************ 00:33:42.089 18:41:00 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:33:42.089 18:41:00 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:42.089 18:41:00 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:42.089 18:41:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:42.089 18:41:00 -- common/autotest_common.sh@10 -- # set +x 
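Before the keyring_linux run starts below, the keyring_file pass above can be condensed: file-backed TLS PSKs are registered by name over the bdevperf RPC socket, and bdev_nvme_attach_controller then references the key by that name. A minimal sketch of the same sequence, assuming a bdevperf instance already listening on /var/tmp/bperf.sock and using placeholder key paths (the /tmp/tmp.* files above are generated per run):

  # register two file-based PSKs with the in-process keyring (paths are illustrative)
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/psk0.key
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/psk1.key
  # attach an NVMe/TCP controller that uses key0 as its TLS PSK
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
      -q nqn.2016-06.io.spdk:host0 --psk key0
  # verify both keys are visible and that key0 gained a reference from the controller
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | {name, refcnt}'
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers | jq -r '.[].name'

That is exactly what the checks above assert: keyring_get_keys reports two keys, key0 holds refcnt 2 while the controller is attached, key1 stays at 1, and the controller list returns nvme0.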
00:33:42.089 ************************************ 00:33:42.089 START TEST keyring_linux 00:33:42.089 ************************************ 00:33:42.089 18:41:00 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:42.089 Joined session keyring: 93610726 00:33:42.089 * Looking for test storage... 00:33:42.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:42.089 18:41:00 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:33:42.089 18:41:00 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:33:42.089 18:41:00 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:33:42.347 18:41:00 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:33:42.347 18:41:00 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:42.347 18:41:00 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:42.347 18:41:00 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:42.347 18:41:00 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:33:42.347 18:41:00 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:33:42.347 18:41:00 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:33:42.347 18:41:00 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:33:42.347 18:41:00 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:33:42.347 18:41:00 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:33:42.347 18:41:00 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:33:42.347 18:41:00 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:42.347 18:41:00 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:33:42.347 18:41:00 keyring_linux -- scripts/common.sh@345 -- # : 1 00:33:42.347 18:41:00 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:42.347 18:41:00 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:42.347 18:41:00 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:33:42.347 18:41:00 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:33:42.347 18:41:00 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:42.347 18:41:00 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:33:42.347 18:41:00 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:33:42.347 18:41:00 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:33:42.347 18:41:00 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:33:42.347 18:41:00 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:42.347 18:41:00 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:33:42.347 18:41:00 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:33:42.347 18:41:00 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:42.347 18:41:00 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:42.347 18:41:00 keyring_linux -- scripts/common.sh@368 -- # return 0 00:33:42.347 18:41:00 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:42.347 18:41:00 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:33:42.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:42.347 --rc genhtml_branch_coverage=1 00:33:42.347 --rc genhtml_function_coverage=1 00:33:42.347 --rc genhtml_legend=1 00:33:42.347 --rc geninfo_all_blocks=1 00:33:42.347 --rc geninfo_unexecuted_blocks=1 00:33:42.347 00:33:42.347 ' 00:33:42.347 18:41:00 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:33:42.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:42.347 --rc genhtml_branch_coverage=1 00:33:42.347 --rc genhtml_function_coverage=1 00:33:42.347 --rc genhtml_legend=1 00:33:42.347 --rc geninfo_all_blocks=1 00:33:42.347 --rc geninfo_unexecuted_blocks=1 00:33:42.347 00:33:42.347 ' 00:33:42.347 18:41:00 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:33:42.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:42.347 --rc genhtml_branch_coverage=1 00:33:42.347 --rc genhtml_function_coverage=1 00:33:42.347 --rc genhtml_legend=1 00:33:42.347 --rc geninfo_all_blocks=1 00:33:42.347 --rc geninfo_unexecuted_blocks=1 00:33:42.347 00:33:42.348 ' 00:33:42.348 18:41:00 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:33:42.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:42.348 --rc genhtml_branch_coverage=1 00:33:42.348 --rc genhtml_function_coverage=1 00:33:42.348 --rc genhtml_legend=1 00:33:42.348 --rc geninfo_all_blocks=1 00:33:42.348 --rc geninfo_unexecuted_blocks=1 00:33:42.348 00:33:42.348 ' 00:33:42.348 18:41:00 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:42.348 18:41:00 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:42.348 18:41:00 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:33:42.348 18:41:00 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:42.348 18:41:00 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:42.348 18:41:00 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:42.348 18:41:00 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:42.348 18:41:00 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:42.348 18:41:00 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:42.348 18:41:00 keyring_linux -- paths/export.sh@5 -- # export PATH 00:33:42.348 18:41:00 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
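The nvmf/common.sh variables traced just above show how the host identity used for every connect in this log is built: nvme gen-hostnqn produces the host NQN, the UUID portion serves as the host ID, and both are packed into the NVME_HOST argument array. A short sketch of the same idea, assuming nvme-cli is available; the parameter expansion is one way to obtain the UUID suffix, not necessarily how common.sh derives it:

  NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}       # the bare UUID, matching the value logged above
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  # later consumers expand the array, e.g.: nvme connect "${NVME_HOST[@]}" ...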
00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:42.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:42.348 18:41:00 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:42.348 18:41:00 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:42.348 18:41:00 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:42.348 18:41:00 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:33:42.348 18:41:00 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:33:42.348 18:41:00 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:33:42.348 18:41:00 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:33:42.348 18:41:00 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:42.348 18:41:00 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:33:42.348 18:41:00 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:42.348 18:41:00 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:42.348 18:41:00 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:33:42.348 18:41:00 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@731 -- # python - 00:33:42.348 18:41:00 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:33:42.348 18:41:00 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:33:42.348 /tmp/:spdk-test:key0 00:33:42.348 18:41:00 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:33:42.348 18:41:00 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:42.348 18:41:00 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:33:42.348 18:41:00 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:42.348 18:41:00 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:42.348 18:41:00 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:33:42.348 
18:41:00 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:33:42.348 18:41:00 keyring_linux -- nvmf/common.sh@731 -- # python - 00:33:42.348 18:41:00 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:33:42.348 18:41:00 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:33:42.348 /tmp/:spdk-test:key1 00:33:42.348 18:41:00 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1441921 00:33:42.348 18:41:00 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:42.348 18:41:00 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1441921 00:33:42.348 18:41:00 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1441921 ']' 00:33:42.348 18:41:00 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:42.348 18:41:00 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:42.348 18:41:00 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:42.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:42.348 18:41:00 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:42.348 18:41:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:42.348 [2024-10-08 18:41:00.543974] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:33:42.348 [2024-10-08 18:41:00.544070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1441921 ] 00:33:42.348 [2024-10-08 18:41:00.604162] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:42.605 [2024-10-08 18:41:00.719248] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:33:42.863 18:41:00 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:42.863 18:41:00 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:33:42.863 18:41:00 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:33:42.863 18:41:00 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.863 18:41:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:42.863 [2024-10-08 18:41:00.995745] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:42.863 null0 00:33:42.863 [2024-10-08 18:41:01.027786] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:42.863 [2024-10-08 18:41:01.028302] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:42.863 18:41:01 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.863 18:41:01 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:33:42.863 275401678 00:33:42.863 18:41:01 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:33:42.863 897546654 00:33:42.863 18:41:01 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1442057 00:33:42.863 18:41:01 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:33:42.863 18:41:01 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1442057 /var/tmp/bperf.sock 00:33:42.863 18:41:01 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1442057 ']' 00:33:42.863 18:41:01 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:42.863 18:41:01 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:42.863 18:41:01 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:42.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:42.863 18:41:01 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:42.863 18:41:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:42.863 [2024-10-08 18:41:01.092796] Starting SPDK v25.01-pre git sha1 fdd35b2ab / DPDK 24.03.0 initialization... 
00:33:42.863 [2024-10-08 18:41:01.092860] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1442057 ] 00:33:42.863 [2024-10-08 18:41:01.148915] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:43.120 [2024-10-08 18:41:01.256043] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:33:43.120 18:41:01 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:43.120 18:41:01 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:33:43.120 18:41:01 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:33:43.120 18:41:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:33:43.377 18:41:01 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:33:43.377 18:41:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:43.942 18:41:01 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:43.942 18:41:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:43.942 [2024-10-08 18:41:02.190243] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:44.199 nvme0n1 00:33:44.199 18:41:02 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:33:44.199 18:41:02 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:33:44.199 18:41:02 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:44.199 18:41:02 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:44.199 18:41:02 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:44.199 18:41:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:44.456 18:41:02 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:33:44.456 18:41:02 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:44.457 18:41:02 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:33:44.457 18:41:02 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:33:44.457 18:41:02 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:44.457 18:41:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:44.457 18:41:02 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:33:44.714 18:41:02 keyring_linux -- keyring/linux.sh@25 -- # sn=275401678 00:33:44.714 18:41:02 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:33:44.714 18:41:02 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:44.714 18:41:02 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 275401678 == \2\7\5\4\0\1\6\7\8 ]] 00:33:44.714 18:41:02 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 275401678 00:33:44.714 18:41:02 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:33:44.714 18:41:02 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:44.714 Running I/O for 1 seconds... 00:33:45.904 11481.00 IOPS, 44.85 MiB/s 00:33:45.904 Latency(us) 00:33:45.904 [2024-10-08T16:41:04.191Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:45.904 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:45.904 nvme0n1 : 1.01 11481.61 44.85 0.00 0.00 11079.61 3252.53 14660.65 00:33:45.904 [2024-10-08T16:41:04.191Z] =================================================================================================================== 00:33:45.904 [2024-10-08T16:41:04.191Z] Total : 11481.61 44.85 0.00 0.00 11079.61 3252.53 14660.65 00:33:45.904 { 00:33:45.904 "results": [ 00:33:45.904 { 00:33:45.904 "job": "nvme0n1", 00:33:45.904 "core_mask": "0x2", 00:33:45.904 "workload": "randread", 00:33:45.904 "status": "finished", 00:33:45.904 "queue_depth": 128, 00:33:45.904 "io_size": 4096, 00:33:45.904 "runtime": 1.011182, 00:33:45.904 "iops": 11481.612607819365, 00:33:45.904 "mibps": 44.85004924929439, 00:33:45.904 "io_failed": 0, 00:33:45.904 "io_timeout": 0, 00:33:45.904 "avg_latency_us": 11079.605389479057, 00:33:45.904 "min_latency_us": 3252.5274074074073, 00:33:45.904 "max_latency_us": 14660.645925925926 00:33:45.904 } 00:33:45.904 ], 00:33:45.904 "core_count": 1 00:33:45.904 } 00:33:45.904 18:41:03 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:45.904 18:41:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:46.162 18:41:04 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:33:46.162 18:41:04 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:33:46.162 18:41:04 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:46.162 18:41:04 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:46.162 18:41:04 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:46.163 18:41:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:46.478 18:41:04 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:33:46.478 18:41:04 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:46.478 18:41:04 keyring_linux -- keyring/linux.sh@23 -- # return 00:33:46.478 18:41:04 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:46.478 18:41:04 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:33:46.478 18:41:04 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:33:46.478 18:41:04 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:33:46.478 18:41:04 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:46.478 18:41:04 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:33:46.478 18:41:04 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:46.478 18:41:04 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:46.478 18:41:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:46.478 [2024-10-08 18:41:04.751903] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:46.478 [2024-10-08 18:41:04.752440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xafb160 (107): Transport endpoint is not connected 00:33:46.478 [2024-10-08 18:41:04.753423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xafb160 (9): Bad file descriptor 00:33:46.478 [2024-10-08 18:41:04.754422] nvme_ctrlr.c:4224:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:46.478 [2024-10-08 18:41:04.754447] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:46.478 [2024-10-08 18:41:04.754462] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:33:46.478 [2024-10-08 18:41:04.754487] nvme_ctrlr.c:1126:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:33:46.736 request: 00:33:46.736 { 00:33:46.736 "name": "nvme0", 00:33:46.736 "trtype": "tcp", 00:33:46.736 "traddr": "127.0.0.1", 00:33:46.736 "adrfam": "ipv4", 00:33:46.736 "trsvcid": "4420", 00:33:46.736 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:46.736 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:46.736 "prchk_reftag": false, 00:33:46.736 "prchk_guard": false, 00:33:46.736 "hdgst": false, 00:33:46.736 "ddgst": false, 00:33:46.736 "psk": ":spdk-test:key1", 00:33:46.736 "allow_unrecognized_csi": false, 00:33:46.736 "method": "bdev_nvme_attach_controller", 00:33:46.736 "req_id": 1 00:33:46.736 } 00:33:46.736 Got JSON-RPC error response 00:33:46.736 response: 00:33:46.736 { 00:33:46.736 "code": -5, 00:33:46.736 "message": "Input/output error" 00:33:46.736 } 00:33:46.736 18:41:04 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:33:46.736 18:41:04 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:46.736 18:41:04 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:46.736 18:41:04 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:46.736 18:41:04 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:33:46.736 18:41:04 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:46.736 18:41:04 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:33:46.736 18:41:04 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:33:46.736 18:41:04 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:33:46.736 18:41:04 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:46.736 18:41:04 keyring_linux -- keyring/linux.sh@33 -- # sn=275401678 00:33:46.736 18:41:04 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 275401678 00:33:46.736 1 links removed 00:33:46.736 18:41:04 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:46.736 18:41:04 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:33:46.736 18:41:04 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:33:46.736 18:41:04 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:33:46.736 18:41:04 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:33:46.736 18:41:04 keyring_linux -- keyring/linux.sh@33 -- # sn=897546654 00:33:46.736 18:41:04 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 897546654 00:33:46.736 1 links removed 00:33:46.736 18:41:04 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1442057 00:33:46.736 18:41:04 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1442057 ']' 00:33:46.736 18:41:04 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1442057 00:33:46.736 18:41:04 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:33:46.736 18:41:04 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:46.736 18:41:04 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1442057 00:33:46.736 18:41:04 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:46.736 18:41:04 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:46.736 18:41:04 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1442057' 00:33:46.736 killing process with pid 1442057 00:33:46.736 18:41:04 keyring_linux -- common/autotest_common.sh@969 -- # kill 1442057 00:33:46.736 Received shutdown signal, test time was about 1.000000 seconds 00:33:46.736 00:33:46.737 
Latency(us) 00:33:46.737 [2024-10-08T16:41:05.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:46.737 [2024-10-08T16:41:05.024Z] =================================================================================================================== 00:33:46.737 [2024-10-08T16:41:05.024Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:46.737 18:41:04 keyring_linux -- common/autotest_common.sh@974 -- # wait 1442057 00:33:46.994 18:41:05 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1441921 00:33:46.994 18:41:05 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1441921 ']' 00:33:46.994 18:41:05 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1441921 00:33:46.994 18:41:05 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:33:46.994 18:41:05 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:46.994 18:41:05 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1441921 00:33:46.994 18:41:05 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:46.994 18:41:05 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:46.994 18:41:05 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1441921' 00:33:46.994 killing process with pid 1441921 00:33:46.994 18:41:05 keyring_linux -- common/autotest_common.sh@969 -- # kill 1441921 00:33:46.994 18:41:05 keyring_linux -- common/autotest_common.sh@974 -- # wait 1441921 00:33:47.560 00:33:47.560 real 0m5.347s 00:33:47.560 user 0m10.475s 00:33:47.560 sys 0m1.633s 00:33:47.560 18:41:05 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:47.560 18:41:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:47.560 ************************************ 00:33:47.560 END TEST keyring_linux 00:33:47.560 ************************************ 00:33:47.560 18:41:05 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:33:47.560 18:41:05 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:33:47.560 18:41:05 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:33:47.560 18:41:05 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:33:47.560 18:41:05 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:33:47.560 18:41:05 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:33:47.560 18:41:05 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:33:47.560 18:41:05 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:33:47.560 18:41:05 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:33:47.560 18:41:05 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:33:47.560 18:41:05 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:47.560 18:41:05 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:33:47.560 18:41:05 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:47.560 18:41:05 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:47.560 18:41:05 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:33:47.560 18:41:05 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:33:47.560 18:41:05 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:33:47.560 18:41:05 -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:47.560 18:41:05 -- common/autotest_common.sh@10 -- # set +x 00:33:47.560 18:41:05 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:33:47.560 18:41:05 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:33:47.560 18:41:05 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:33:47.560 18:41:05 -- common/autotest_common.sh@10 -- # set +x 00:33:49.461 INFO: APP EXITING 
00:33:49.461 INFO: killing all VMs 00:33:49.461 INFO: killing vhost app 00:33:49.461 INFO: EXIT DONE 00:33:50.397 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:33:50.397 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:33:50.397 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:33:50.397 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:33:50.397 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:33:50.397 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:33:50.655 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:33:50.655 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:33:50.655 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:33:50.655 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:33:50.655 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:33:50.655 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:33:50.655 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:33:50.655 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:33:50.655 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:33:50.655 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:33:50.655 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:33:52.030 Cleaning 00:33:52.030 Removing: /var/run/dpdk/spdk0/config 00:33:52.030 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:52.030 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:52.030 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:52.030 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:52.030 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:52.030 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:52.030 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:52.030 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:52.030 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:52.030 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:52.030 Removing: /var/run/dpdk/spdk1/config 00:33:52.030 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:52.030 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:52.030 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:52.030 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:52.030 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:52.030 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:52.030 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:52.030 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:52.030 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:52.030 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:52.030 Removing: /var/run/dpdk/spdk2/config 00:33:52.030 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:52.030 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:52.030 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:52.030 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:52.030 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:52.030 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:52.030 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:52.030 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:52.030 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:52.030 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:52.030 Removing: /var/run/dpdk/spdk3/config 00:33:52.030 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:52.030 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:52.030 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:52.030 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:52.030 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:52.030 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:52.030 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:52.030 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:52.030 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:52.030 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:52.030 Removing: /var/run/dpdk/spdk4/config 00:33:52.030 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:52.030 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:52.030 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:52.030 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:52.030 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:52.030 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:52.030 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:52.031 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:52.031 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:52.031 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:52.031 Removing: /dev/shm/bdev_svc_trace.1 00:33:52.031 Removing: /dev/shm/nvmf_trace.0 00:33:52.031 Removing: /dev/shm/spdk_tgt_trace.pid1121129 00:33:52.031 Removing: /var/run/dpdk/spdk0 00:33:52.031 Removing: /var/run/dpdk/spdk1 00:33:52.031 Removing: /var/run/dpdk/spdk2 00:33:52.031 Removing: /var/run/dpdk/spdk3 00:33:52.031 Removing: /var/run/dpdk/spdk4 00:33:52.031 Removing: /var/run/dpdk/spdk_pid1119449 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1120191 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1121129 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1121496 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1122271 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1122443 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1123639 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1123756 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1124020 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1125341 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1126316 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1126593 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1126794 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1127119 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1127330 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1127489 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1127641 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1127950 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1128286 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1130771 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1130949 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1131221 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1131230 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1131661 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1131672 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1132095 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1132119 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1132397 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1132415 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1132582 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1132702 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1133089 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1133362 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1133563 00:33:52.289 Removing: 
/var/run/dpdk/spdk_pid1135704 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1138323 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1145453 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1145861 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1148381 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1148663 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1151305 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1155144 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1157860 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1164293 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1169637 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1170848 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1171521 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1182028 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1184447 00:33:52.289 Removing: /var/run/dpdk/spdk_pid1212345 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1215644 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1219481 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1223479 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1223483 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1224021 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1224674 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1225333 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1225734 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1225736 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1225882 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1226009 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1226017 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1226674 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1227444 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1228261 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1228992 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1229009 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1229158 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1230169 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1230897 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1236240 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1264220 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1267252 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1268430 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1269749 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1269893 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1270033 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1270172 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1270623 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1271942 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1272802 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1273233 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1274855 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1275277 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1275843 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1278355 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1282276 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1282277 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1282278 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1284493 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1289342 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1291988 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1295896 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1296849 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1297935 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1299019 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1301849 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1304216 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1308448 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1308571 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1311359 00:33:52.290 Removing: 
/var/run/dpdk/spdk_pid1311511 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1311743 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1312017 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1312026 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1314788 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1315122 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1317870 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1320391 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1323832 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1327423 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1333915 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1338393 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1338395 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1350799 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1351324 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1351833 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1352372 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1353464 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1353869 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1354280 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1354812 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1357323 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1357466 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1361389 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1361460 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1364822 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1367433 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1374366 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1374763 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1377269 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1377557 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1380061 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1383762 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1386040 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1393041 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1398245 00:33:52.290 Removing: /var/run/dpdk/spdk_pid1399436 00:33:52.548 Removing: /var/run/dpdk/spdk_pid1400093 00:33:52.548 Removing: /var/run/dpdk/spdk_pid1410414 00:33:52.548 Removing: /var/run/dpdk/spdk_pid1412658 00:33:52.548 Removing: /var/run/dpdk/spdk_pid1414668 00:33:52.548 Removing: /var/run/dpdk/spdk_pid1419714 00:33:52.548 Removing: /var/run/dpdk/spdk_pid1419725 00:33:52.548 Removing: /var/run/dpdk/spdk_pid1422739 00:33:52.548 Removing: /var/run/dpdk/spdk_pid1424643 00:33:52.548 Removing: /var/run/dpdk/spdk_pid1426044 00:33:52.548 Removing: /var/run/dpdk/spdk_pid1426900 00:33:52.548 Removing: /var/run/dpdk/spdk_pid1428190 00:33:52.548 Removing: /var/run/dpdk/spdk_pid1429069 00:33:52.548 Removing: /var/run/dpdk/spdk_pid1434471 00:33:52.548 Removing: /var/run/dpdk/spdk_pid1434856 00:33:52.549 Removing: /var/run/dpdk/spdk_pid1435262 00:33:52.549 Removing: /var/run/dpdk/spdk_pid1436828 00:33:52.549 Removing: /var/run/dpdk/spdk_pid1437222 00:33:52.549 Removing: /var/run/dpdk/spdk_pid1437508 00:33:52.549 Removing: /var/run/dpdk/spdk_pid1439954 00:33:52.549 Removing: /var/run/dpdk/spdk_pid1439964 00:33:52.549 Removing: /var/run/dpdk/spdk_pid1441442 00:33:52.549 Removing: /var/run/dpdk/spdk_pid1441921 00:33:52.549 Removing: /var/run/dpdk/spdk_pid1442057 00:33:52.549 Clean 00:33:52.549 18:41:10 -- common/autotest_common.sh@1451 -- # return 0 00:33:52.549 18:41:10 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:33:52.549 18:41:10 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:52.549 18:41:10 -- common/autotest_common.sh@10 -- # set +x 00:33:52.549 18:41:10 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:33:52.549 
18:41:10 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:52.549 18:41:10 -- common/autotest_common.sh@10 -- # set +x 00:33:52.549 18:41:10 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:52.549 18:41:10 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:33:52.549 18:41:10 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:33:52.549 18:41:10 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:33:52.549 18:41:10 -- spdk/autotest.sh@394 -- # hostname 00:33:52.549 18:41:10 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:33:52.806 geninfo: WARNING: invalid characters removed from testname! 00:34:24.865 18:41:41 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:27.389 18:41:45 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:30.662 18:41:48 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:33.938 18:41:51 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:36.462 18:41:54 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:39.739 18:41:57 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:43.017 18:42:00 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:43.017 18:42:00 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:34:43.017 18:42:00 -- common/autotest_common.sh@1681 -- $ lcov --version 00:34:43.017 18:42:00 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:34:43.017 18:42:00 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:34:43.017 18:42:00 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:34:43.017 18:42:00 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:34:43.017 18:42:00 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:34:43.017 18:42:00 -- scripts/common.sh@336 -- $ IFS=.-: 00:34:43.017 18:42:00 -- scripts/common.sh@336 -- $ read -ra ver1 00:34:43.017 18:42:00 -- scripts/common.sh@337 -- $ IFS=.-: 00:34:43.017 18:42:00 -- scripts/common.sh@337 -- $ read -ra ver2 00:34:43.017 18:42:00 -- scripts/common.sh@338 -- $ local 'op=<' 00:34:43.017 18:42:00 -- scripts/common.sh@340 -- $ ver1_l=2 00:34:43.017 18:42:00 -- scripts/common.sh@341 -- $ ver2_l=1 00:34:43.017 18:42:00 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:34:43.017 18:42:00 -- scripts/common.sh@344 -- $ case "$op" in 00:34:43.017 18:42:00 -- scripts/common.sh@345 -- $ : 1 00:34:43.017 18:42:00 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:34:43.017 18:42:00 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:43.017 18:42:00 -- scripts/common.sh@365 -- $ decimal 1 00:34:43.017 18:42:00 -- scripts/common.sh@353 -- $ local d=1 00:34:43.017 18:42:00 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:34:43.017 18:42:00 -- scripts/common.sh@355 -- $ echo 1 00:34:43.017 18:42:00 -- scripts/common.sh@365 -- $ ver1[v]=1 00:34:43.017 18:42:00 -- scripts/common.sh@366 -- $ decimal 2 00:34:43.017 18:42:00 -- scripts/common.sh@353 -- $ local d=2 00:34:43.017 18:42:00 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:34:43.017 18:42:00 -- scripts/common.sh@355 -- $ echo 2 00:34:43.017 18:42:00 -- scripts/common.sh@366 -- $ ver2[v]=2 00:34:43.017 18:42:00 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:34:43.017 18:42:00 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:34:43.017 18:42:00 -- scripts/common.sh@368 -- $ return 0 00:34:43.017 18:42:00 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:43.017 18:42:00 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:34:43.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.017 --rc genhtml_branch_coverage=1 00:34:43.017 --rc genhtml_function_coverage=1 00:34:43.017 --rc genhtml_legend=1 00:34:43.017 --rc geninfo_all_blocks=1 00:34:43.017 --rc geninfo_unexecuted_blocks=1 00:34:43.017 00:34:43.017 ' 00:34:43.017 18:42:00 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:34:43.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.017 --rc genhtml_branch_coverage=1 00:34:43.017 --rc genhtml_function_coverage=1 00:34:43.017 --rc genhtml_legend=1 00:34:43.017 --rc geninfo_all_blocks=1 00:34:43.017 --rc geninfo_unexecuted_blocks=1 00:34:43.017 00:34:43.017 ' 00:34:43.017 18:42:00 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:34:43.017 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.017 --rc genhtml_branch_coverage=1 00:34:43.017 --rc genhtml_function_coverage=1 00:34:43.017 --rc genhtml_legend=1 00:34:43.017 --rc geninfo_all_blocks=1 00:34:43.017 --rc geninfo_unexecuted_blocks=1 00:34:43.017 00:34:43.017 ' 00:34:43.017 18:42:00 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:34:43.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.017 --rc genhtml_branch_coverage=1 00:34:43.017 --rc genhtml_function_coverage=1 00:34:43.017 --rc genhtml_legend=1 00:34:43.017 --rc geninfo_all_blocks=1 00:34:43.017 --rc geninfo_unexecuted_blocks=1 00:34:43.017 00:34:43.017 ' 00:34:43.017 18:42:00 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:43.017 18:42:00 -- scripts/common.sh@15 -- $ shopt -s extglob 00:34:43.017 18:42:00 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:43.017 18:42:00 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:43.017 18:42:00 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:43.017 18:42:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.017 18:42:00 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.017 18:42:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.017 18:42:00 -- paths/export.sh@5 -- $ export PATH 00:34:43.017 18:42:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.017 18:42:00 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:34:43.017 18:42:00 -- common/autobuild_common.sh@486 -- $ date +%s 00:34:43.017 18:42:00 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728405720.XXXXXX 00:34:43.017 18:42:00 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728405720.vwqHeM 00:34:43.017 18:42:00 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:34:43.017 18:42:00 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:34:43.017 18:42:00 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:34:43.017 18:42:00 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:34:43.017 18:42:00 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:34:43.017 18:42:00 -- common/autobuild_common.sh@502 -- $ get_config_params 00:34:43.017 18:42:00 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:34:43.017 18:42:00 -- common/autotest_common.sh@10 -- $ set +x 00:34:43.017 18:42:00 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:34:43.017 18:42:00 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:34:43.017 18:42:00 -- pm/common@17 -- $ local monitor 00:34:43.017 18:42:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:43.017 18:42:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:43.017 18:42:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:43.017 18:42:00 -- pm/common@21 -- $ date +%s 00:34:43.017 18:42:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:43.017 18:42:00 -- pm/common@21 -- $ date +%s 00:34:43.017 18:42:00 -- pm/common@25 -- $ sleep 1 00:34:43.017 18:42:00 -- pm/common@21 -- $ date +%s 00:34:43.017 18:42:00 -- pm/common@21 -- $ date +%s 00:34:43.017 18:42:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728405720 00:34:43.017 18:42:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728405720 00:34:43.017 18:42:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728405720 00:34:43.017 18:42:00 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728405720 00:34:43.017 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728405720_collect-vmstat.pm.log 00:34:43.017 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728405720_collect-cpu-load.pm.log 00:34:43.017 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728405720_collect-cpu-temp.pm.log 00:34:43.017 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728405720_collect-bmc-pm.bmc.pm.log 00:34:43.955 18:42:01 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:34:43.955 18:42:01 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:34:43.955 18:42:01 -- spdk/autopackage.sh@14 -- $ timing_finish 
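Note: the xtrace entries above show autopackage starting its resource monitors before packaging begins: one shared `date +%s` timestamp names the per-collector log files under the output/power directory, each collector (collect-cpu-load, collect-vmstat, collect-cpu-temp, collect-bmc-pm) is launched in the background, and `trap stop_monitor_resources EXIT` guarantees teardown. A minimal sketch of that start/trap pattern follows; the collector script names and their `-d/-l/-p` flags are taken from the log, while `OUT`, the `start_monitor`/`stop_monitors` wrappers, and the pid-file handling are illustrative assumptions rather than the real pm/common implementation.

    #!/usr/bin/env bash
    # Hedged sketch of the monitor start/stop pattern traced above.
    # OUT, start_monitor, stop_monitors and the pid-file layout are assumptions.
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    OUT=$SPDK_DIR/../output/power
    ts=$(date +%s)                       # one timestamp shared by all collectors

    start_monitor() {
        local name=$1
        # each collector runs in the background and logs to a timestamped file
        "$SPDK_DIR/scripts/perf/pm/$name" -d "$OUT" -l -p "monitor.autopackage.sh.$ts" &
        echo $! > "$OUT/$name.pid"       # remember its pid for teardown
    }

    stop_monitors() {
        local pidfile
        for pidfile in "$OUT"/*.pid; do
            [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")" 2>/dev/null
        done
    }

    trap stop_monitors EXIT              # mirrors 'trap stop_monitor_resources EXIT'

    start_monitor collect-cpu-load
    start_monitor collect-vmstat
    start_monitor collect-cpu-temp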
00:34:43.955 18:42:01 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:43.955 18:42:01 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:34:43.955 18:42:01 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:43.955 18:42:01 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:34:43.955 18:42:01 -- pm/common@29 -- $ signal_monitor_resources TERM 00:34:43.955 18:42:01 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:34:43.955 18:42:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:43.955 18:42:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:34:43.955 18:42:01 -- pm/common@44 -- $ pid=1452700 00:34:43.955 18:42:01 -- pm/common@50 -- $ kill -TERM 1452700 00:34:43.955 18:42:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:43.955 18:42:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:34:43.955 18:42:01 -- pm/common@44 -- $ pid=1452702 00:34:43.955 18:42:01 -- pm/common@50 -- $ kill -TERM 1452702 00:34:43.955 18:42:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:43.955 18:42:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:34:43.955 18:42:01 -- pm/common@44 -- $ pid=1452704 00:34:43.955 18:42:01 -- pm/common@50 -- $ kill -TERM 1452704 00:34:43.955 18:42:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:43.955 18:42:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:34:43.955 18:42:01 -- pm/common@44 -- $ pid=1452734 00:34:43.955 18:42:01 -- pm/common@50 -- $ sudo -E kill -TERM 1452734 00:34:43.955 + [[ -n 1049575 ]] 00:34:43.955 + sudo kill 1049575 00:34:43.965 [Pipeline] } 00:34:43.980 [Pipeline] // stage 00:34:43.985 [Pipeline] } 00:34:43.998 [Pipeline] // timeout 00:34:44.003 [Pipeline] } 00:34:44.017 [Pipeline] // catchError 00:34:44.021 [Pipeline] } 00:34:44.035 [Pipeline] // wrap 00:34:44.041 [Pipeline] } 00:34:44.053 [Pipeline] // catchError 00:34:44.062 [Pipeline] stage 00:34:44.065 [Pipeline] { (Epilogue) 00:34:44.077 [Pipeline] catchError 00:34:44.079 [Pipeline] { 00:34:44.091 [Pipeline] echo 00:34:44.093 Cleanup processes 00:34:44.099 [Pipeline] sh 00:34:44.384 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:44.384 1452917 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:34:44.384 1453069 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:44.398 [Pipeline] sh 00:34:44.682 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:44.682 ++ grep -v 'sudo pgrep' 00:34:44.682 ++ awk '{print $1}' 00:34:44.682 + sudo kill -9 1452917 00:34:44.694 [Pipeline] sh 00:34:44.977 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:54.971 [Pipeline] sh 00:34:55.257 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:55.257 Artifacts sizes are good 00:34:55.271 [Pipeline] archiveArtifacts 00:34:55.278 Archiving artifacts 00:34:55.417 [Pipeline] sh 00:34:55.699 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 
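Note: the epilogue above repeats the cleanup idiom used by this job: list every process whose command line mentions the workspace spdk tree with `pgrep -af`, drop the `pgrep` invocation itself, take the PID column with `awk`, and force-kill whatever remains (the trailing `+ true` keeps the step green when nothing matched). A small hedged sketch of that idiom follows; the workspace path and the filter pipeline come straight from the log, while the `kill_workspace_procs` wrapper name is illustrative.

    #!/usr/bin/env bash
    # Hedged sketch of the pgrep/grep/awk/kill cleanup seen in the epilogue.
    # kill_workspace_procs is an illustrative name, not an SPDK helper.
    kill_workspace_procs() {
        local pattern=$1
        local pids
        pids=$(sudo pgrep -af "$pattern" | grep -v 'sudo pgrep' | awk '{print $1}')
        # kill nothing (and still succeed) when no matching processes are left,
        # mirroring the '+ true' fallback in the log
        [[ -n $pids ]] && sudo kill -9 $pids || true
    }

    kill_workspace_procs /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk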
00:34:55.715 [Pipeline] cleanWs 00:34:55.725 [WS-CLEANUP] Deleting project workspace... 00:34:55.725 [WS-CLEANUP] Deferred wipeout is used... 00:34:55.732 [WS-CLEANUP] done 00:34:55.734 [Pipeline] } 00:34:55.750 [Pipeline] // catchError 00:34:55.761 [Pipeline] sh 00:34:56.041 + logger -p user.info -t JENKINS-CI 00:34:56.049 [Pipeline] } 00:34:56.062 [Pipeline] // stage 00:34:56.066 [Pipeline] } 00:34:56.080 [Pipeline] // node 00:34:56.085 [Pipeline] End of Pipeline 00:34:56.121 Finished: SUCCESS